00:00:00.001 Started by upstream project "autotest-per-patch" build number 132826 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.095 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.096 The recommended git tool is: git 00:00:00.097 using credential 00000000-0000-0000-0000-000000000002 00:00:00.098 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.166 Fetching changes from the remote Git repository 00:00:00.168 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.243 Using shallow fetch with depth 1 00:00:00.243 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.243 > git --version # timeout=10 00:00:00.302 > git --version # 'git version 2.39.2' 00:00:00.302 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.335 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.335 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.923 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.936 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.949 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.949 > git config core.sparsecheckout # timeout=10 00:00:05.964 > git read-tree -mu HEAD # timeout=10 00:00:05.982 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.003 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.003 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.170 [Pipeline] Start of Pipeline 00:00:06.183 [Pipeline] library 00:00:06.185 Loading library shm_lib@master 00:00:06.185 Library shm_lib@master is cached. Copying from home. 00:00:06.203 [Pipeline] node 00:00:06.215 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.216 [Pipeline] { 00:00:06.225 [Pipeline] catchError 00:00:06.226 [Pipeline] { 00:00:06.239 [Pipeline] wrap 00:00:06.247 [Pipeline] { 00:00:06.256 [Pipeline] stage 00:00:06.257 [Pipeline] { (Prologue) 00:00:06.269 [Pipeline] echo 00:00:06.270 Node: VM-host-WFP1 00:00:06.274 [Pipeline] cleanWs 00:00:06.283 [WS-CLEANUP] Deleting project workspace... 00:00:06.283 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.293 [WS-CLEANUP] done 00:00:06.560 [Pipeline] setCustomBuildProperty 00:00:06.664 [Pipeline] httpRequest 00:00:07.094 [Pipeline] echo 00:00:07.096 Sorcerer 10.211.164.112 is alive 00:00:07.103 [Pipeline] retry 00:00:07.104 [Pipeline] { 00:00:07.116 [Pipeline] httpRequest 00:00:07.121 HttpMethod: GET 00:00:07.122 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.122 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.132 Response Code: HTTP/1.1 200 OK 00:00:07.133 Success: Status code 200 is in the accepted range: 200,404 00:00:07.133 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.043 [Pipeline] } 00:00:14.060 [Pipeline] // retry 00:00:14.068 [Pipeline] sh 00:00:14.351 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.367 [Pipeline] httpRequest 00:00:14.986 [Pipeline] echo 00:00:14.987 Sorcerer 10.211.164.112 is alive 00:00:14.997 [Pipeline] retry 00:00:14.999 [Pipeline] { 00:00:15.012 [Pipeline] httpRequest 00:00:15.017 HttpMethod: GET 00:00:15.017 URL: http://10.211.164.112/packages/spdk_52a4134875252629d5d87a15dc337c6bfe0b3746.tar.gz 00:00:15.018 Sending request to url: http://10.211.164.112/packages/spdk_52a4134875252629d5d87a15dc337c6bfe0b3746.tar.gz 00:00:15.044 Response Code: HTTP/1.1 200 OK 00:00:15.044 Success: Status code 200 is in the accepted range: 200,404 00:00:15.045 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_52a4134875252629d5d87a15dc337c6bfe0b3746.tar.gz 00:02:42.070 [Pipeline] } 00:02:42.085 [Pipeline] // retry 00:02:42.092 [Pipeline] sh 00:02:42.371 + tar --no-same-owner -xf spdk_52a4134875252629d5d87a15dc337c6bfe0b3746.tar.gz 00:02:44.914 [Pipeline] sh 00:02:45.192 + git -C spdk log --oneline -n5 00:02:45.192 52a413487 bdev: do not retry nomem I/Os during aborting them 00:02:45.192 d13942918 bdev: simplify bdev_reset_freeze_channel 00:02:45.192 0edc184ec accel/mlx5: Support mkey registration 00:02:45.192 06358c250 bdev/nvme: use poll_group's fd_group to register interrupts 00:02:45.192 1ae735a5d nvme: add poll_group interrupt callback 00:02:45.209 [Pipeline] writeFile 00:02:45.224 [Pipeline] sh 00:02:45.545 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:45.556 [Pipeline] sh 00:02:45.834 + cat autorun-spdk.conf 00:02:45.834 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:45.834 SPDK_TEST_NVME=1 00:02:45.834 SPDK_TEST_FTL=1 00:02:45.834 SPDK_TEST_ISAL=1 00:02:45.834 SPDK_RUN_ASAN=1 00:02:45.834 SPDK_RUN_UBSAN=1 00:02:45.834 SPDK_TEST_XNVME=1 00:02:45.834 SPDK_TEST_NVME_FDP=1 00:02:45.834 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:45.841 RUN_NIGHTLY=0 00:02:45.843 [Pipeline] } 00:02:45.856 [Pipeline] // stage 00:02:45.871 [Pipeline] stage 00:02:45.874 [Pipeline] { (Run VM) 00:02:45.887 [Pipeline] sh 00:02:46.166 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:46.166 + echo 'Start stage prepare_nvme.sh' 00:02:46.166 Start stage prepare_nvme.sh 00:02:46.166 + [[ -n 1 ]] 00:02:46.166 + disk_prefix=ex1 00:02:46.166 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:02:46.166 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:02:46.166 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:02:46.166 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:46.166 ++ SPDK_TEST_NVME=1 00:02:46.166 ++ SPDK_TEST_FTL=1 00:02:46.166 ++ SPDK_TEST_ISAL=1 00:02:46.166 ++ SPDK_RUN_ASAN=1 
00:02:46.166 ++ SPDK_RUN_UBSAN=1 00:02:46.166 ++ SPDK_TEST_XNVME=1 00:02:46.166 ++ SPDK_TEST_NVME_FDP=1 00:02:46.166 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:46.166 ++ RUN_NIGHTLY=0 00:02:46.166 + cd /var/jenkins/workspace/nvme-vg-autotest 00:02:46.166 + nvme_files=() 00:02:46.166 + declare -A nvme_files 00:02:46.166 + backend_dir=/var/lib/libvirt/images/backends 00:02:46.166 + nvme_files['nvme.img']=5G 00:02:46.166 + nvme_files['nvme-cmb.img']=5G 00:02:46.166 + nvme_files['nvme-multi0.img']=4G 00:02:46.166 + nvme_files['nvme-multi1.img']=4G 00:02:46.166 + nvme_files['nvme-multi2.img']=4G 00:02:46.166 + nvme_files['nvme-openstack.img']=8G 00:02:46.166 + nvme_files['nvme-zns.img']=5G 00:02:46.166 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:46.166 + (( SPDK_TEST_FTL == 1 )) 00:02:46.166 + nvme_files["nvme-ftl.img"]=6G 00:02:46.166 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:46.166 + nvme_files["nvme-fdp.img"]=1G 00:02:46.166 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:46.166 + for nvme in "${!nvme_files[@]}" 00:02:46.166 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:02:46.166 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:46.166 + for nvme in "${!nvme_files[@]}" 00:02:46.166 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:02:46.166 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:46.166 + for nvme in "${!nvme_files[@]}" 00:02:46.166 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:02:46.166 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:46.166 + for nvme in "${!nvme_files[@]}" 00:02:46.166 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:02:46.425 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:46.425 + for nvme in "${!nvme_files[@]}" 00:02:46.425 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:02:46.425 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:46.425 + for nvme in "${!nvme_files[@]}" 00:02:46.425 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:02:46.425 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:46.425 + for nvme in "${!nvme_files[@]}" 00:02:46.425 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:02:46.425 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:46.425 + for nvme in "${!nvme_files[@]}" 00:02:46.425 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:02:46.683 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:46.683 + for nvme in "${!nvme_files[@]}" 00:02:46.683 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:02:46.683 Formatting 
'/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:46.683 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:02:46.683 + echo 'End stage prepare_nvme.sh' 00:02:46.683 End stage prepare_nvme.sh 00:02:46.695 [Pipeline] sh 00:02:46.975 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:46.975 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:46.975 00:02:46.975 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:02:46.975 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:02:46.975 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:02:46.975 HELP=0 00:02:46.975 DRY_RUN=0 00:02:46.975 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:02:46.975 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:46.975 NVME_AUTO_CREATE=0 00:02:46.975 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:02:46.975 NVME_CMB=,,,, 00:02:46.975 NVME_PMR=,,,, 00:02:46.975 NVME_ZNS=,,,, 00:02:46.975 NVME_MS=true,,,, 00:02:46.975 NVME_FDP=,,,on, 00:02:46.975 SPDK_VAGRANT_DISTRO=fedora39 00:02:46.975 SPDK_VAGRANT_VMCPU=10 00:02:46.975 SPDK_VAGRANT_VMRAM=12288 00:02:46.975 SPDK_VAGRANT_PROVIDER=libvirt 00:02:46.975 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:46.975 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:46.975 SPDK_OPENSTACK_NETWORK=0 00:02:46.975 VAGRANT_PACKAGE_BOX=0 00:02:46.975 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:46.975 FORCE_DISTRO=true 00:02:46.975 VAGRANT_BOX_VERSION= 00:02:46.975 EXTRA_VAGRANTFILES= 00:02:46.975 NIC_MODEL=e1000 00:02:46.975 00:02:46.975 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:02:46.975 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:02:50.266 Bringing machine 'default' up with 'libvirt' provider... 00:02:51.203 ==> default: Creating image (snapshot of base box volume). 00:02:51.203 ==> default: Creating domain with the following settings... 
00:02:51.203 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733829017_015c89cd1ea580c9ea28 00:02:51.203 ==> default: -- Domain type: kvm 00:02:51.203 ==> default: -- Cpus: 10 00:02:51.203 ==> default: -- Feature: acpi 00:02:51.203 ==> default: -- Feature: apic 00:02:51.203 ==> default: -- Feature: pae 00:02:51.203 ==> default: -- Memory: 12288M 00:02:51.203 ==> default: -- Memory Backing: hugepages: 00:02:51.203 ==> default: -- Management MAC: 00:02:51.203 ==> default: -- Loader: 00:02:51.203 ==> default: -- Nvram: 00:02:51.203 ==> default: -- Base box: spdk/fedora39 00:02:51.203 ==> default: -- Storage pool: default 00:02:51.203 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733829017_015c89cd1ea580c9ea28.img (20G) 00:02:51.203 ==> default: -- Volume Cache: default 00:02:51.203 ==> default: -- Kernel: 00:02:51.203 ==> default: -- Initrd: 00:02:51.203 ==> default: -- Graphics Type: vnc 00:02:51.203 ==> default: -- Graphics Port: -1 00:02:51.203 ==> default: -- Graphics IP: 127.0.0.1 00:02:51.203 ==> default: -- Graphics Password: Not defined 00:02:51.203 ==> default: -- Video Type: cirrus 00:02:51.203 ==> default: -- Video VRAM: 9216 00:02:51.203 ==> default: -- Sound Type: 00:02:51.203 ==> default: -- Keymap: en-us 00:02:51.203 ==> default: -- TPM Path: 00:02:51.203 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:51.203 ==> default: -- Command line args: 00:02:51.203 ==> default: -> value=-device, 00:02:51.203 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:51.203 ==> default: -> value=-drive, 00:02:51.203 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:51.203 ==> default: -> value=-device, 00:02:51.203 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:51.203 ==> default: -> value=-device, 00:02:51.203 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:51.203 ==> default: -> value=-drive, 00:02:51.203 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:02:51.203 ==> default: -> value=-device, 00:02:51.203 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:51.203 ==> default: -> value=-device, 00:02:51.203 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:51.203 ==> default: -> value=-drive, 00:02:51.203 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:51.203 ==> default: -> value=-device, 00:02:51.203 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:51.203 ==> default: -> value=-drive, 00:02:51.203 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:51.203 ==> default: -> value=-device, 00:02:51.203 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:51.203 ==> default: -> value=-drive, 00:02:51.203 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:51.203 ==> default: -> value=-device, 00:02:51.203 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:51.203 ==> default: -> value=-device, 00:02:51.203 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:51.203 ==> default: -> value=-device, 00:02:51.203 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:51.203 ==> default: -> value=-drive, 00:02:51.203 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:51.203 ==> default: -> value=-device, 00:02:51.203 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:51.772 ==> default: Creating shared folders metadata... 00:02:51.773 ==> default: Starting domain. 00:02:53.149 ==> default: Waiting for domain to get an IP address... 00:03:11.325 ==> default: Waiting for SSH to become available... 00:03:11.325 ==> default: Configuring and enabling network interfaces... 00:03:16.627 default: SSH address: 192.168.121.24:22 00:03:16.627 default: SSH username: vagrant 00:03:16.627 default: SSH auth method: private key 00:03:18.603 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:28.585 ==> default: Mounting SSHFS shared folder... 00:03:29.972 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:29.972 ==> default: Checking Mount.. 00:03:31.347 ==> default: Folder Successfully Mounted! 00:03:31.347 ==> default: Running provisioner: file... 00:03:32.284 default: ~/.gitconfig => .gitconfig 00:03:32.851 00:03:32.851 SUCCESS! 00:03:32.851 00:03:32.851 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:03:32.851 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:32.851 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:03:32.851 00:03:32.860 [Pipeline] } 00:03:32.874 [Pipeline] // stage 00:03:32.884 [Pipeline] dir 00:03:32.885 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:03:32.887 [Pipeline] { 00:03:32.899 [Pipeline] catchError 00:03:32.900 [Pipeline] { 00:03:32.912 [Pipeline] sh 00:03:33.193 + vagrant ssh-config --host vagrant 00:03:33.193 + sed -ne /^Host/,$p 00:03:33.193 + tee ssh_conf 00:03:36.493 Host vagrant 00:03:36.493 HostName 192.168.121.24 00:03:36.493 User vagrant 00:03:36.493 Port 22 00:03:36.493 UserKnownHostsFile /dev/null 00:03:36.493 StrictHostKeyChecking no 00:03:36.493 PasswordAuthentication no 00:03:36.493 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:36.493 IdentitiesOnly yes 00:03:36.493 LogLevel FATAL 00:03:36.493 ForwardAgent yes 00:03:36.493 ForwardX11 yes 00:03:36.493 00:03:36.508 [Pipeline] withEnv 00:03:36.511 [Pipeline] { 00:03:36.525 [Pipeline] sh 00:03:36.807 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:36.807 source /etc/os-release 00:03:36.807 [[ -e /image.version ]] && img=$(< /image.version) 00:03:36.807 # Minimal, systemd-like check. 
00:03:36.807 if [[ -e /.dockerenv ]]; then 00:03:36.807 # Clear garbage from the node's name: 00:03:36.807 # agt-er_autotest_547-896 -> autotest_547-896 00:03:36.807 # $HOSTNAME is the actual container id 00:03:36.807 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:36.807 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:36.807 # We can assume this is a mount from a host where container is running, 00:03:36.807 # so fetch its hostname to easily identify the target swarm worker. 00:03:36.807 container="$(< /etc/hostname) ($agent)" 00:03:36.807 else 00:03:36.807 # Fallback 00:03:36.807 container=$agent 00:03:36.807 fi 00:03:36.807 fi 00:03:36.807 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:36.807 00:03:37.078 [Pipeline] } 00:03:37.093 [Pipeline] // withEnv 00:03:37.102 [Pipeline] setCustomBuildProperty 00:03:37.117 [Pipeline] stage 00:03:37.119 [Pipeline] { (Tests) 00:03:37.135 [Pipeline] sh 00:03:37.417 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:37.688 [Pipeline] sh 00:03:37.969 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:38.247 [Pipeline] timeout 00:03:38.247 Timeout set to expire in 50 min 00:03:38.249 [Pipeline] { 00:03:38.265 [Pipeline] sh 00:03:38.538 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:39.105 HEAD is now at 52a413487 bdev: do not retry nomem I/Os during aborting them 00:03:39.116 [Pipeline] sh 00:03:39.397 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:39.670 [Pipeline] sh 00:03:40.013 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:40.287 [Pipeline] sh 00:03:40.568 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:03:40.828 ++ readlink -f spdk_repo 00:03:40.828 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:40.828 + [[ -n /home/vagrant/spdk_repo ]] 00:03:40.828 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:40.828 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:40.828 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:40.828 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:40.828 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:40.828 + [[ nvme-vg-autotest == pkgdep-* ]] 00:03:40.828 + cd /home/vagrant/spdk_repo 00:03:40.828 + source /etc/os-release 00:03:40.828 ++ NAME='Fedora Linux' 00:03:40.828 ++ VERSION='39 (Cloud Edition)' 00:03:40.828 ++ ID=fedora 00:03:40.828 ++ VERSION_ID=39 00:03:40.828 ++ VERSION_CODENAME= 00:03:40.828 ++ PLATFORM_ID=platform:f39 00:03:40.828 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:40.828 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:40.828 ++ LOGO=fedora-logo-icon 00:03:40.828 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:40.828 ++ HOME_URL=https://fedoraproject.org/ 00:03:40.828 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:40.828 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:40.828 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:40.828 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:40.828 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:40.828 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:40.828 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:40.828 ++ SUPPORT_END=2024-11-12 00:03:40.828 ++ VARIANT='Cloud Edition' 00:03:40.828 ++ VARIANT_ID=cloud 00:03:40.828 + uname -a 00:03:40.828 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:40.828 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:41.396 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:41.655 Hugepages 00:03:41.655 node hugesize free / total 00:03:41.655 node0 1048576kB 0 / 0 00:03:41.655 node0 2048kB 0 / 0 00:03:41.655 00:03:41.655 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:41.655 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:41.655 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:41.655 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:41.914 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:03:41.914 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:41.914 + rm -f /tmp/spdk-ld-path 00:03:41.914 + source autorun-spdk.conf 00:03:41.914 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:41.914 ++ SPDK_TEST_NVME=1 00:03:41.914 ++ SPDK_TEST_FTL=1 00:03:41.914 ++ SPDK_TEST_ISAL=1 00:03:41.914 ++ SPDK_RUN_ASAN=1 00:03:41.914 ++ SPDK_RUN_UBSAN=1 00:03:41.914 ++ SPDK_TEST_XNVME=1 00:03:41.914 ++ SPDK_TEST_NVME_FDP=1 00:03:41.914 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:41.914 ++ RUN_NIGHTLY=0 00:03:41.914 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:41.914 + [[ -n '' ]] 00:03:41.914 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:41.914 + for M in /var/spdk/build-*-manifest.txt 00:03:41.914 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:41.914 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:41.914 + for M in /var/spdk/build-*-manifest.txt 00:03:41.914 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:41.914 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:41.914 + for M in /var/spdk/build-*-manifest.txt 00:03:41.914 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:41.914 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:41.914 ++ uname 00:03:41.914 + [[ Linux == \L\i\n\u\x ]] 00:03:41.914 + sudo dmesg -T 00:03:41.914 + sudo dmesg --clear 00:03:41.914 + dmesg_pid=5251 00:03:41.914 
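
Note on the trace above: autorun-spdk.conf drives the whole run. The file is sourced and each SPDK_TEST_*/SPDK_RUN_* key is then checked with an arithmetic conditional, as in the (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) and (( SPDK_TEST_FTL == 1 )) tests seen in prepare_nvme.sh and autoruner.sh. A minimal sketch of that source-then-gate pattern; my-autorun.conf is a hypothetical stand-in, not the real SPDK scripts:

    #!/bin/bash
    # Sketch only: mirrors the source-then-gate pattern visible in the trace.
    # my-autorun.conf is assumed to contain lines such as: SPDK_TEST_NVME=1
    source ./my-autorun.conf
    if (( SPDK_TEST_NVME == 1 )); then
        echo "NVMe functional tests enabled"
    fi
    if (( SPDK_TEST_FTL == 1 )); then
        echo "FTL enabled: an extra nvme-ftl backing image is prepared"
    fi
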
+ [[ Fedora Linux == FreeBSD ]] 00:03:41.914 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:41.914 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:41.914 + sudo dmesg -Tw 00:03:41.914 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:41.914 + [[ -x /usr/src/fio-static/fio ]] 00:03:41.914 + export FIO_BIN=/usr/src/fio-static/fio 00:03:41.914 + FIO_BIN=/usr/src/fio-static/fio 00:03:41.914 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:41.914 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:41.914 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:41.914 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:41.914 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:41.914 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:41.914 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:41.914 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:41.914 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:42.174 11:11:09 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:42.174 11:11:09 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:42.174 11:11:09 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:42.174 11:11:09 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:03:42.174 11:11:09 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:03:42.174 11:11:09 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:03:42.174 11:11:09 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:03:42.174 11:11:09 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:03:42.174 11:11:09 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:03:42.174 11:11:09 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:03:42.174 11:11:09 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:42.174 11:11:09 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:03:42.174 11:11:09 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:42.174 11:11:09 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:42.174 11:11:09 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:42.174 11:11:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:42.174 11:11:09 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:42.174 11:11:09 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:42.174 11:11:09 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:42.174 11:11:09 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:42.174 11:11:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.174 11:11:09 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.174 11:11:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.174 11:11:09 -- paths/export.sh@5 -- $ export PATH 00:03:42.174 11:11:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.174 11:11:09 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:42.174 11:11:09 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:42.174 11:11:09 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733829069.XXXXXX 00:03:42.174 11:11:09 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733829069.wDRbjy 00:03:42.174 11:11:09 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:42.174 11:11:09 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:42.174 11:11:09 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:42.174 11:11:09 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:42.174 11:11:09 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:42.174 11:11:09 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:42.174 11:11:09 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:42.174 11:11:09 -- common/autotest_common.sh@10 -- $ set +x 00:03:42.174 11:11:09 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:03:42.174 11:11:09 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:42.174 11:11:09 -- pm/common@17 -- $ local monitor 00:03:42.174 11:11:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.174 11:11:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.174 11:11:09 -- pm/common@21 -- $ date +%s 00:03:42.174 11:11:09 -- pm/common@25 -- $ sleep 1 00:03:42.174 11:11:09 -- pm/common@21 -- $ date +%s 00:03:42.174 11:11:09 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733829069 00:03:42.174 11:11:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733829069 00:03:42.174 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733829069_collect-vmstat.pm.log 00:03:42.174 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733829069_collect-cpu-load.pm.log 00:03:43.111 11:11:10 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:43.111 11:11:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:43.111 11:11:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:43.111 11:11:10 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:43.111 11:11:10 -- spdk/autobuild.sh@16 -- $ date -u 00:03:43.370 Tue Dec 10 11:11:10 AM UTC 2024 00:03:43.370 11:11:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:43.370 v25.01-pre-324-g52a413487 00:03:43.370 11:11:10 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:43.370 11:11:10 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:43.370 11:11:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:43.370 11:11:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:43.370 11:11:10 -- common/autotest_common.sh@10 -- $ set +x 00:03:43.370 ************************************ 00:03:43.370 START TEST asan 00:03:43.370 ************************************ 00:03:43.370 using asan 00:03:43.370 11:11:10 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:03:43.370 00:03:43.370 real 0m0.001s 00:03:43.370 user 0m0.000s 00:03:43.370 sys 0m0.001s 00:03:43.370 11:11:10 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:43.370 ************************************ 00:03:43.370 11:11:10 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:43.370 END TEST asan 00:03:43.370 ************************************ 00:03:43.370 11:11:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:43.370 11:11:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:43.370 11:11:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:43.370 11:11:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:43.370 11:11:10 -- common/autotest_common.sh@10 -- $ set +x 00:03:43.370 ************************************ 00:03:43.370 START TEST ubsan 00:03:43.370 ************************************ 00:03:43.370 using ubsan 00:03:43.370 11:11:10 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:43.370 00:03:43.370 real 0m0.000s 00:03:43.370 user 0m0.000s 00:03:43.370 sys 0m0.000s 00:03:43.370 11:11:10 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:43.370 ************************************ 00:03:43.370 END TEST ubsan 00:03:43.370 11:11:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:43.370 ************************************ 00:03:43.370 11:11:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:43.370 11:11:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:43.370 11:11:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:43.370 11:11:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:43.370 11:11:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:43.370 11:11:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:43.370 11:11:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:03:43.370 11:11:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:43.370 11:11:10 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:03:43.629 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:43.629 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:44.224 Using 'verbs' RDMA provider 00:04:00.080 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:18.166 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:18.166 Creating mk/config.mk...done. 00:04:18.166 Creating mk/cc.flags.mk...done. 00:04:18.166 Type 'make' to build. 00:04:18.166 11:11:43 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:18.166 11:11:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:18.166 11:11:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:18.166 11:11:43 -- common/autotest_common.sh@10 -- $ set +x 00:04:18.166 ************************************ 00:04:18.166 START TEST make 00:04:18.166 ************************************ 00:04:18.166 11:11:43 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:18.166 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:18.166 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:18.166 meson setup builddir \ 00:04:18.166 -Dwith-libaio=enabled \ 00:04:18.166 -Dwith-liburing=enabled \ 00:04:18.166 -Dwith-libvfn=disabled \ 00:04:18.166 -Dwith-spdk=disabled \ 00:04:18.166 -Dexamples=false \ 00:04:18.166 -Dtests=false \ 00:04:18.166 -Dtools=false && \ 00:04:18.166 meson compile -C builddir && \ 00:04:18.166 cd -) 00:04:18.166 make[1]: Nothing to be done for 'all'. 
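
Note: the xnvme build above is configured with meson. Disentangled from the timestamps, the logged invocation amounts to the following, which can be re-run by hand against a checkout (a sketch assuming meson and ninja are installed; the path is the one from the log):

    # Same feature flags as the logged xnvme configure step.
    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
    meson setup builddir \
        -Dwith-libaio=enabled -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir
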
00:04:20.069 The Meson build system 00:04:20.069 Version: 1.5.0 00:04:20.069 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:20.069 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:20.069 Build type: native build 00:04:20.069 Project name: xnvme 00:04:20.069 Project version: 0.7.5 00:04:20.069 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:20.069 C linker for the host machine: cc ld.bfd 2.40-14 00:04:20.069 Host machine cpu family: x86_64 00:04:20.069 Host machine cpu: x86_64 00:04:20.069 Message: host_machine.system: linux 00:04:20.069 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:20.069 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:20.069 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:20.069 Run-time dependency threads found: YES 00:04:20.069 Has header "setupapi.h" : NO 00:04:20.069 Has header "linux/blkzoned.h" : YES 00:04:20.069 Has header "linux/blkzoned.h" : YES (cached) 00:04:20.069 Has header "libaio.h" : YES 00:04:20.069 Library aio found: YES 00:04:20.069 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:20.069 Run-time dependency liburing found: YES 2.2 00:04:20.069 Dependency libvfn skipped: feature with-libvfn disabled 00:04:20.069 Found CMake: /usr/bin/cmake (3.27.7) 00:04:20.069 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:04:20.069 Subproject spdk : skipped: feature with-spdk disabled 00:04:20.069 Run-time dependency appleframeworks found: NO (tried framework) 00:04:20.069 Run-time dependency appleframeworks found: NO (tried framework) 00:04:20.069 Library rt found: YES 00:04:20.069 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:20.069 Configuring xnvme_config.h using configuration 00:04:20.069 Configuring xnvme.spec using configuration 00:04:20.069 Run-time dependency bash-completion found: YES 2.11 00:04:20.069 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:20.069 Program cp found: YES (/usr/bin/cp) 00:04:20.069 Build targets in project: 3 00:04:20.069 00:04:20.069 xnvme 0.7.5 00:04:20.069 00:04:20.069 Subprojects 00:04:20.069 spdk : NO Feature 'with-spdk' disabled 00:04:20.069 00:04:20.069 User defined options 00:04:20.069 examples : false 00:04:20.069 tests : false 00:04:20.069 tools : false 00:04:20.069 with-libaio : enabled 00:04:20.069 with-liburing: enabled 00:04:20.069 with-libvfn : disabled 00:04:20.069 with-spdk : disabled 00:04:20.069 00:04:20.069 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:20.329 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:20.329 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:04:20.329 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:04:20.329 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:04:20.329 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:04:20.329 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:04:20.329 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:04:20.329 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:04:20.329 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:04:20.329 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:04:20.329 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 
00:04:20.329 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:04:20.329 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:04:20.588 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:04:20.588 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:04:20.588 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:04:20.588 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:04:20.588 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:04:20.588 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:04:20.588 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:04:20.588 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:04:20.588 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:04:20.588 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:04:20.588 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:04:20.588 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:04:20.588 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:04:20.588 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:04:20.588 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:04:20.588 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:04:20.588 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:04:20.588 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:04:20.588 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:04:20.588 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:04:20.588 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:04:20.588 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:04:20.588 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:04:20.588 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:04:20.588 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:04:20.588 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:04:20.588 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:04:20.588 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:04:20.861 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:04:20.861 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:04:20.861 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:04:20.861 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:04:20.861 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:04:20.861 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:04:20.861 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:04:20.861 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:04:20.861 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:04:20.861 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:04:20.861 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:04:20.861 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:04:20.861 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:04:20.861 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:04:20.861 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:04:20.861 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:04:20.861 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:04:20.861 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:04:20.861 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:04:20.861 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:04:20.861 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:04:20.861 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:04:20.861 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:04:21.135 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:04:21.135 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:04:21.135 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:04:21.135 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:04:21.135 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:04:21.135 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:04:21.135 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:04:21.135 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:04:21.135 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:04:21.135 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:04:21.704 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:04:21.704 [75/76] Linking static target lib/libxnvme.a 00:04:21.704 [76/76] Linking target lib/libxnvme.so.0.7.5 00:04:21.704 INFO: autodetecting backend as ninja 00:04:21.704 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:21.704 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:29.838 The Meson build system 00:04:29.838 Version: 1.5.0 00:04:29.838 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:29.838 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:29.838 Build type: native build 00:04:29.838 Program cat found: YES (/usr/bin/cat) 00:04:29.838 Project name: DPDK 00:04:29.838 Project version: 24.03.0 00:04:29.838 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:29.838 C linker for the host machine: cc ld.bfd 2.40-14 00:04:29.838 Host machine cpu family: x86_64 00:04:29.838 Host machine cpu: x86_64 00:04:29.838 Message: ## Building in Developer Mode ## 00:04:29.838 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:29.838 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:29.838 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:29.838 Program python3 found: YES (/usr/bin/python3) 00:04:29.838 Program cat found: YES (/usr/bin/cat) 00:04:29.838 Compiler for C supports arguments -march=native: YES 00:04:29.838 Checking for size of "void *" : 8 00:04:29.838 Checking for size of "void *" : 8 (cached) 00:04:29.838 Compiler for C supports link 
arguments -Wl,--undefined-version: YES 00:04:29.838 Library m found: YES 00:04:29.838 Library numa found: YES 00:04:29.838 Has header "numaif.h" : YES 00:04:29.838 Library fdt found: NO 00:04:29.838 Library execinfo found: NO 00:04:29.838 Has header "execinfo.h" : YES 00:04:29.838 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:29.838 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:29.838 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:29.838 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:29.838 Run-time dependency openssl found: YES 3.1.1 00:04:29.838 Run-time dependency libpcap found: YES 1.10.4 00:04:29.838 Has header "pcap.h" with dependency libpcap: YES 00:04:29.838 Compiler for C supports arguments -Wcast-qual: YES 00:04:29.838 Compiler for C supports arguments -Wdeprecated: YES 00:04:29.838 Compiler for C supports arguments -Wformat: YES 00:04:29.838 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:29.838 Compiler for C supports arguments -Wformat-security: NO 00:04:29.838 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:29.838 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:29.838 Compiler for C supports arguments -Wnested-externs: YES 00:04:29.838 Compiler for C supports arguments -Wold-style-definition: YES 00:04:29.838 Compiler for C supports arguments -Wpointer-arith: YES 00:04:29.838 Compiler for C supports arguments -Wsign-compare: YES 00:04:29.838 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:29.838 Compiler for C supports arguments -Wundef: YES 00:04:29.838 Compiler for C supports arguments -Wwrite-strings: YES 00:04:29.838 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:29.838 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:29.838 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:29.838 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:29.838 Program objdump found: YES (/usr/bin/objdump) 00:04:29.838 Compiler for C supports arguments -mavx512f: YES 00:04:29.838 Checking if "AVX512 checking" compiles: YES 00:04:29.838 Fetching value of define "__SSE4_2__" : 1 00:04:29.838 Fetching value of define "__AES__" : 1 00:04:29.838 Fetching value of define "__AVX__" : 1 00:04:29.838 Fetching value of define "__AVX2__" : 1 00:04:29.838 Fetching value of define "__AVX512BW__" : 1 00:04:29.838 Fetching value of define "__AVX512CD__" : 1 00:04:29.838 Fetching value of define "__AVX512DQ__" : 1 00:04:29.838 Fetching value of define "__AVX512F__" : 1 00:04:29.838 Fetching value of define "__AVX512VL__" : 1 00:04:29.838 Fetching value of define "__PCLMUL__" : 1 00:04:29.838 Fetching value of define "__RDRND__" : 1 00:04:29.838 Fetching value of define "__RDSEED__" : 1 00:04:29.838 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:29.838 Fetching value of define "__znver1__" : (undefined) 00:04:29.838 Fetching value of define "__znver2__" : (undefined) 00:04:29.838 Fetching value of define "__znver3__" : (undefined) 00:04:29.838 Fetching value of define "__znver4__" : (undefined) 00:04:29.838 Library asan found: YES 00:04:29.838 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:29.838 Message: lib/log: Defining dependency "log" 00:04:29.838 Message: lib/kvargs: Defining dependency "kvargs" 00:04:29.838 Message: lib/telemetry: Defining dependency "telemetry" 00:04:29.838 Library rt found: YES 00:04:29.838 Checking for function "getentropy" : NO 
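
Note: the 'Fetching value of define' entries in this configure pass read the compiler's predefined CPU-feature macros under -march=native; the same values can be inspected outside meson (a sketch, assuming gcc is on PATH):

    # Print gcc's predefined macros for the native CPU and keep the ones
    # DPDK's meson step queries (__AVX512F__, __AES__, __PCLMUL__, ...).
    gcc -march=native -dM -E - </dev/null | \
        grep -E '__(AVX512(F|BW|CD|DQ|VL)|AES|PCLMUL|RDRND|RDSEED|VPCLMULQDQ)__'
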
00:04:29.838 Message: lib/eal: Defining dependency "eal" 00:04:29.839 Message: lib/ring: Defining dependency "ring" 00:04:29.839 Message: lib/rcu: Defining dependency "rcu" 00:04:29.839 Message: lib/mempool: Defining dependency "mempool" 00:04:29.839 Message: lib/mbuf: Defining dependency "mbuf" 00:04:29.839 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:29.839 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:29.839 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:29.839 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:29.839 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:29.839 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:29.839 Compiler for C supports arguments -mpclmul: YES 00:04:29.839 Compiler for C supports arguments -maes: YES 00:04:29.839 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:29.839 Compiler for C supports arguments -mavx512bw: YES 00:04:29.839 Compiler for C supports arguments -mavx512dq: YES 00:04:29.839 Compiler for C supports arguments -mavx512vl: YES 00:04:29.839 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:29.839 Compiler for C supports arguments -mavx2: YES 00:04:29.839 Compiler for C supports arguments -mavx: YES 00:04:29.839 Message: lib/net: Defining dependency "net" 00:04:29.839 Message: lib/meter: Defining dependency "meter" 00:04:29.839 Message: lib/ethdev: Defining dependency "ethdev" 00:04:29.839 Message: lib/pci: Defining dependency "pci" 00:04:29.839 Message: lib/cmdline: Defining dependency "cmdline" 00:04:29.839 Message: lib/hash: Defining dependency "hash" 00:04:29.839 Message: lib/timer: Defining dependency "timer" 00:04:29.839 Message: lib/compressdev: Defining dependency "compressdev" 00:04:29.839 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:29.839 Message: lib/dmadev: Defining dependency "dmadev" 00:04:29.839 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:29.839 Message: lib/power: Defining dependency "power" 00:04:29.839 Message: lib/reorder: Defining dependency "reorder" 00:04:29.839 Message: lib/security: Defining dependency "security" 00:04:29.839 Has header "linux/userfaultfd.h" : YES 00:04:29.839 Has header "linux/vduse.h" : YES 00:04:29.839 Message: lib/vhost: Defining dependency "vhost" 00:04:29.839 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:29.839 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:29.839 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:29.839 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:29.839 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:29.839 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:29.839 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:29.839 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:29.839 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:29.839 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:29.839 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:29.839 Configuring doxy-api-html.conf using configuration 00:04:29.839 Configuring doxy-api-man.conf using configuration 00:04:29.839 Program mandb found: YES (/usr/bin/mandb) 00:04:29.839 Program sphinx-build found: NO 00:04:29.839 Configuring rte_build_config.h using configuration 00:04:29.839 Message: 00:04:29.839 ================= 00:04:29.839 
Applications Enabled 00:04:29.839 ================= 00:04:29.839 00:04:29.839 apps: 00:04:29.839 00:04:29.839 00:04:29.839 Message: 00:04:29.839 ================= 00:04:29.839 Libraries Enabled 00:04:29.839 ================= 00:04:29.839 00:04:29.839 libs: 00:04:29.839 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:29.839 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:29.839 cryptodev, dmadev, power, reorder, security, vhost, 00:04:29.839 00:04:29.839 Message: 00:04:29.839 =============== 00:04:29.839 Drivers Enabled 00:04:29.839 =============== 00:04:29.839 00:04:29.839 common: 00:04:29.839 00:04:29.839 bus: 00:04:29.839 pci, vdev, 00:04:29.839 mempool: 00:04:29.839 ring, 00:04:29.839 dma: 00:04:29.839 00:04:29.839 net: 00:04:29.839 00:04:29.839 crypto: 00:04:29.839 00:04:29.839 compress: 00:04:29.839 00:04:29.839 vdpa: 00:04:29.839 00:04:29.839 00:04:29.839 Message: 00:04:29.839 ================= 00:04:29.839 Content Skipped 00:04:29.839 ================= 00:04:29.839 00:04:29.839 apps: 00:04:29.839 dumpcap: explicitly disabled via build config 00:04:29.839 graph: explicitly disabled via build config 00:04:29.839 pdump: explicitly disabled via build config 00:04:29.839 proc-info: explicitly disabled via build config 00:04:29.839 test-acl: explicitly disabled via build config 00:04:29.839 test-bbdev: explicitly disabled via build config 00:04:29.839 test-cmdline: explicitly disabled via build config 00:04:29.839 test-compress-perf: explicitly disabled via build config 00:04:29.839 test-crypto-perf: explicitly disabled via build config 00:04:29.839 test-dma-perf: explicitly disabled via build config 00:04:29.839 test-eventdev: explicitly disabled via build config 00:04:29.839 test-fib: explicitly disabled via build config 00:04:29.839 test-flow-perf: explicitly disabled via build config 00:04:29.839 test-gpudev: explicitly disabled via build config 00:04:29.839 test-mldev: explicitly disabled via build config 00:04:29.839 test-pipeline: explicitly disabled via build config 00:04:29.839 test-pmd: explicitly disabled via build config 00:04:29.839 test-regex: explicitly disabled via build config 00:04:29.839 test-sad: explicitly disabled via build config 00:04:29.839 test-security-perf: explicitly disabled via build config 00:04:29.839 00:04:29.839 libs: 00:04:29.839 argparse: explicitly disabled via build config 00:04:29.839 metrics: explicitly disabled via build config 00:04:29.839 acl: explicitly disabled via build config 00:04:29.839 bbdev: explicitly disabled via build config 00:04:29.839 bitratestats: explicitly disabled via build config 00:04:29.839 bpf: explicitly disabled via build config 00:04:29.839 cfgfile: explicitly disabled via build config 00:04:29.839 distributor: explicitly disabled via build config 00:04:29.839 efd: explicitly disabled via build config 00:04:29.839 eventdev: explicitly disabled via build config 00:04:29.839 dispatcher: explicitly disabled via build config 00:04:29.839 gpudev: explicitly disabled via build config 00:04:29.839 gro: explicitly disabled via build config 00:04:29.839 gso: explicitly disabled via build config 00:04:29.839 ip_frag: explicitly disabled via build config 00:04:29.839 jobstats: explicitly disabled via build config 00:04:29.839 latencystats: explicitly disabled via build config 00:04:29.839 lpm: explicitly disabled via build config 00:04:29.839 member: explicitly disabled via build config 00:04:29.839 pcapng: explicitly disabled via build config 00:04:29.839 rawdev: explicitly disabled via build config 
00:04:29.839 regexdev: explicitly disabled via build config 00:04:29.839 mldev: explicitly disabled via build config 00:04:29.839 rib: explicitly disabled via build config 00:04:29.839 sched: explicitly disabled via build config 00:04:29.839 stack: explicitly disabled via build config 00:04:29.839 ipsec: explicitly disabled via build config 00:04:29.839 pdcp: explicitly disabled via build config 00:04:29.839 fib: explicitly disabled via build config 00:04:29.839 port: explicitly disabled via build config 00:04:29.839 pdump: explicitly disabled via build config 00:04:29.839 table: explicitly disabled via build config 00:04:29.839 pipeline: explicitly disabled via build config 00:04:29.839 graph: explicitly disabled via build config 00:04:29.839 node: explicitly disabled via build config 00:04:29.839 00:04:29.839 drivers: 00:04:29.839 common/cpt: not in enabled drivers build config 00:04:29.839 common/dpaax: not in enabled drivers build config 00:04:29.839 common/iavf: not in enabled drivers build config 00:04:29.839 common/idpf: not in enabled drivers build config 00:04:29.839 common/ionic: not in enabled drivers build config 00:04:29.839 common/mvep: not in enabled drivers build config 00:04:29.839 common/octeontx: not in enabled drivers build config 00:04:29.839 bus/auxiliary: not in enabled drivers build config 00:04:29.839 bus/cdx: not in enabled drivers build config 00:04:29.839 bus/dpaa: not in enabled drivers build config 00:04:29.839 bus/fslmc: not in enabled drivers build config 00:04:29.839 bus/ifpga: not in enabled drivers build config 00:04:29.839 bus/platform: not in enabled drivers build config 00:04:29.839 bus/uacce: not in enabled drivers build config 00:04:29.839 bus/vmbus: not in enabled drivers build config 00:04:29.839 common/cnxk: not in enabled drivers build config 00:04:29.839 common/mlx5: not in enabled drivers build config 00:04:29.839 common/nfp: not in enabled drivers build config 00:04:29.839 common/nitrox: not in enabled drivers build config 00:04:29.839 common/qat: not in enabled drivers build config 00:04:29.839 common/sfc_efx: not in enabled drivers build config 00:04:29.839 mempool/bucket: not in enabled drivers build config 00:04:29.839 mempool/cnxk: not in enabled drivers build config 00:04:29.839 mempool/dpaa: not in enabled drivers build config 00:04:29.839 mempool/dpaa2: not in enabled drivers build config 00:04:29.839 mempool/octeontx: not in enabled drivers build config 00:04:29.839 mempool/stack: not in enabled drivers build config 00:04:29.839 dma/cnxk: not in enabled drivers build config 00:04:29.839 dma/dpaa: not in enabled drivers build config 00:04:29.839 dma/dpaa2: not in enabled drivers build config 00:04:29.839 dma/hisilicon: not in enabled drivers build config 00:04:29.839 dma/idxd: not in enabled drivers build config 00:04:29.839 dma/ioat: not in enabled drivers build config 00:04:29.839 dma/skeleton: not in enabled drivers build config 00:04:29.839 net/af_packet: not in enabled drivers build config 00:04:29.839 net/af_xdp: not in enabled drivers build config 00:04:29.839 net/ark: not in enabled drivers build config 00:04:29.839 net/atlantic: not in enabled drivers build config 00:04:29.839 net/avp: not in enabled drivers build config 00:04:29.839 net/axgbe: not in enabled drivers build config 00:04:29.839 net/bnx2x: not in enabled drivers build config 00:04:29.839 net/bnxt: not in enabled drivers build config 00:04:29.839 net/bonding: not in enabled drivers build config 00:04:29.839 net/cnxk: not in enabled drivers build config 
00:04:29.839 net/cpfl: not in enabled drivers build config 00:04:29.839 net/cxgbe: not in enabled drivers build config 00:04:29.839 net/dpaa: not in enabled drivers build config 00:04:29.839 net/dpaa2: not in enabled drivers build config 00:04:29.839 net/e1000: not in enabled drivers build config 00:04:29.840 net/ena: not in enabled drivers build config 00:04:29.840 net/enetc: not in enabled drivers build config 00:04:29.840 net/enetfec: not in enabled drivers build config 00:04:29.840 net/enic: not in enabled drivers build config 00:04:29.840 net/failsafe: not in enabled drivers build config 00:04:29.840 net/fm10k: not in enabled drivers build config 00:04:29.840 net/gve: not in enabled drivers build config 00:04:29.840 net/hinic: not in enabled drivers build config 00:04:29.840 net/hns3: not in enabled drivers build config 00:04:29.840 net/i40e: not in enabled drivers build config 00:04:29.840 net/iavf: not in enabled drivers build config 00:04:29.840 net/ice: not in enabled drivers build config 00:04:29.840 net/idpf: not in enabled drivers build config 00:04:29.840 net/igc: not in enabled drivers build config 00:04:29.840 net/ionic: not in enabled drivers build config 00:04:29.840 net/ipn3ke: not in enabled drivers build config 00:04:29.840 net/ixgbe: not in enabled drivers build config 00:04:29.840 net/mana: not in enabled drivers build config 00:04:29.840 net/memif: not in enabled drivers build config 00:04:29.840 net/mlx4: not in enabled drivers build config 00:04:29.840 net/mlx5: not in enabled drivers build config 00:04:29.840 net/mvneta: not in enabled drivers build config 00:04:29.840 net/mvpp2: not in enabled drivers build config 00:04:29.840 net/netvsc: not in enabled drivers build config 00:04:29.840 net/nfb: not in enabled drivers build config 00:04:29.840 net/nfp: not in enabled drivers build config 00:04:29.840 net/ngbe: not in enabled drivers build config 00:04:29.840 net/null: not in enabled drivers build config 00:04:29.840 net/octeontx: not in enabled drivers build config 00:04:29.840 net/octeon_ep: not in enabled drivers build config 00:04:29.840 net/pcap: not in enabled drivers build config 00:04:29.840 net/pfe: not in enabled drivers build config 00:04:29.840 net/qede: not in enabled drivers build config 00:04:29.840 net/ring: not in enabled drivers build config 00:04:29.840 net/sfc: not in enabled drivers build config 00:04:29.840 net/softnic: not in enabled drivers build config 00:04:29.840 net/tap: not in enabled drivers build config 00:04:29.840 net/thunderx: not in enabled drivers build config 00:04:29.840 net/txgbe: not in enabled drivers build config 00:04:29.840 net/vdev_netvsc: not in enabled drivers build config 00:04:29.840 net/vhost: not in enabled drivers build config 00:04:29.840 net/virtio: not in enabled drivers build config 00:04:29.840 net/vmxnet3: not in enabled drivers build config 00:04:29.840 raw/*: missing internal dependency, "rawdev" 00:04:29.840 crypto/armv8: not in enabled drivers build config 00:04:29.840 crypto/bcmfs: not in enabled drivers build config 00:04:29.840 crypto/caam_jr: not in enabled drivers build config 00:04:29.840 crypto/ccp: not in enabled drivers build config 00:04:29.840 crypto/cnxk: not in enabled drivers build config 00:04:29.840 crypto/dpaa_sec: not in enabled drivers build config 00:04:29.840 crypto/dpaa2_sec: not in enabled drivers build config 00:04:29.840 crypto/ipsec_mb: not in enabled drivers build config 00:04:29.840 crypto/mlx5: not in enabled drivers build config 00:04:29.840 crypto/mvsam: not in enabled 
drivers build config 00:04:29.840 crypto/nitrox: not in enabled drivers build config 00:04:29.840 crypto/null: not in enabled drivers build config 00:04:29.840 crypto/octeontx: not in enabled drivers build config 00:04:29.840 crypto/openssl: not in enabled drivers build config 00:04:29.840 crypto/scheduler: not in enabled drivers build config 00:04:29.840 crypto/uadk: not in enabled drivers build config 00:04:29.840 crypto/virtio: not in enabled drivers build config 00:04:29.840 compress/isal: not in enabled drivers build config 00:04:29.840 compress/mlx5: not in enabled drivers build config 00:04:29.840 compress/nitrox: not in enabled drivers build config 00:04:29.840 compress/octeontx: not in enabled drivers build config 00:04:29.840 compress/zlib: not in enabled drivers build config 00:04:29.840 regex/*: missing internal dependency, "regexdev" 00:04:29.840 ml/*: missing internal dependency, "mldev" 00:04:29.840 vdpa/ifc: not in enabled drivers build config 00:04:29.840 vdpa/mlx5: not in enabled drivers build config 00:04:29.840 vdpa/nfp: not in enabled drivers build config 00:04:29.840 vdpa/sfc: not in enabled drivers build config 00:04:29.840 event/*: missing internal dependency, "eventdev" 00:04:29.840 baseband/*: missing internal dependency, "bbdev" 00:04:29.840 gpu/*: missing internal dependency, "gpudev" 00:04:29.840 00:04:29.840 00:04:29.840 Build targets in project: 85 00:04:29.840 00:04:29.840 DPDK 24.03.0 00:04:29.840 00:04:29.840 User defined options 00:04:29.840 buildtype : debug 00:04:29.840 default_library : shared 00:04:29.840 libdir : lib 00:04:29.840 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:29.840 b_sanitize : address 00:04:29.840 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:29.840 c_link_args : 00:04:29.840 cpu_instruction_set: native 00:04:29.840 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:29.840 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:29.840 enable_docs : false 00:04:29.840 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:29.840 enable_kmods : false 00:04:29.840 max_lcores : 128 00:04:29.840 tests : false 00:04:29.840 00:04:29.840 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:29.840 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:30.098 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:30.098 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:30.098 [3/268] Linking static target lib/librte_kvargs.a 00:04:30.098 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:30.098 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:30.098 [6/268] Linking static target lib/librte_log.a 00:04:30.357 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:30.357 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:30.616 [9/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:30.616 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.616 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:30.616 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:30.616 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:30.616 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:30.616 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:30.875 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:30.875 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:30.875 [18/268] Linking static target lib/librte_telemetry.a 00:04:31.134 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.134 [20/268] Linking target lib/librte_log.so.24.1 00:04:31.134 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:31.134 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:31.134 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:31.393 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:31.393 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:31.393 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:31.393 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:31.393 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:31.393 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:31.393 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:31.393 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:31.393 [32/268] Linking target lib/librte_kvargs.so.24.1 00:04:31.652 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.652 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:31.652 [35/268] Linking target lib/librte_telemetry.so.24.1 00:04:31.652 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:31.910 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:31.910 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:31.910 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:31.910 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:31.910 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:31.910 [42/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:31.910 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:31.910 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:32.168 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:32.168 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:32.168 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:32.168 
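
The "User defined options" summary above is meson echoing back the configuration of the bundled DPDK submodule, which the SPDK build drives under /home/vagrant/spdk_repo/spdk/dpdk (the build directory is build-tmp, as ninja reports). For reference, a setup invocation along the following lines would reproduce that summary. This is a sketch reconstructed from the printed options, not the literal command the build scripts ran (the log does not show it); the long disable_apps, disable_libs, and enable_drivers values are abbreviated here because they are printed in full above.

  # Hypothetical reconstruction of the DPDK configure step; the real
  # command is assembled by SPDK's build scripts and is not shown in
  # this log.
  meson setup build-tmp \
      --buildtype=debug \
      --default-library=shared \
      --libdir=lib \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps='dumpcap,graph,pdump,...' \
      -Ddisable_libs='acl,argparse,bbdev,...' \
      -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring,...' \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dmax_lcores=128 \
      -Dtests=false

The "Applications Enabled", "Libraries Enabled", and "Content Skipped" sections earlier in the output follow directly from those disable_apps and disable_libs lists.
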
[48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:32.426 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:32.426 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:32.685 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:32.685 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:32.685 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:32.685 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:32.685 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:32.685 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:32.943 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:32.943 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:32.943 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:32.943 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:32.943 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:33.201 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:33.201 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:33.201 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:33.201 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:33.201 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:33.460 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:33.460 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:33.719 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:33.719 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:33.719 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:33.719 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:33.719 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:33.719 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:33.719 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:33.720 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:33.979 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:33.979 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:33.979 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:33.979 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:34.237 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:34.237 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:34.237 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:34.237 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:34.494 [85/268] Linking static target lib/librte_ring.a 00:04:34.494 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:34.494 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:34.494 [88/268] Linking static target lib/librte_eal.a 00:04:34.494 [89/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:34.752 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:34.752 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:34.752 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:34.752 [93/268] Linking static target lib/librte_mempool.a 00:04:34.752 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:34.752 [95/268] Linking static target lib/librte_rcu.a 00:04:35.012 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:35.012 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:35.012 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.012 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:35.012 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:35.012 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:35.271 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:35.271 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:35.271 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.529 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:35.529 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:35.529 [107/268] Linking static target lib/librte_mbuf.a 00:04:35.530 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:35.530 [109/268] Linking static target lib/librte_meter.a 00:04:35.530 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:35.530 [111/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:35.530 [112/268] Linking static target lib/librte_net.a 00:04:35.788 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:35.788 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:36.048 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:36.048 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.048 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.048 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.307 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:36.565 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.565 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:36.565 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:36.826 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:36.826 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:36.826 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:36.826 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:36.826 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:36.826 [128/268] Linking static target lib/librte_pci.a 00:04:37.085 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:37.085 [130/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:37.085 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:37.085 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:37.344 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:37.344 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:37.344 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:37.344 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:37.344 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:37.344 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:37.344 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:37.344 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:37.344 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:37.602 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:37.602 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:37.603 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:37.603 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:37.603 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:37.603 [147/268] Linking static target lib/librte_cmdline.a 00:04:37.862 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:38.122 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:38.122 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:38.385 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:38.385 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:38.385 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:38.385 [154/268] Linking static target lib/librte_timer.a 00:04:38.680 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:38.680 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:38.680 [157/268] Linking static target lib/librte_ethdev.a 00:04:38.939 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:38.939 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:38.939 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:38.939 [161/268] Linking static target lib/librte_hash.a 00:04:38.939 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:39.198 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.198 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:39.198 [165/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:39.198 [166/268] Linking static target lib/librte_compressdev.a 00:04:39.198 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:39.198 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:39.198 [169/268] Linking static target lib/librte_dmadev.a 00:04:39.457 [170/268] Generating lib/cmdline.sym_chk with a 
custom command (wrapped by meson to capture output) 00:04:39.457 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:39.457 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:39.716 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:39.716 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:39.975 [175/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:39.975 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:39.975 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:39.975 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:40.234 [179/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.234 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:40.234 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:40.234 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.234 [183/268] Linking static target lib/librte_cryptodev.a 00:04:40.234 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.801 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:40.801 [186/268] Linking static target lib/librte_power.a 00:04:40.801 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:40.801 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:40.801 [189/268] Linking static target lib/librte_reorder.a 00:04:40.801 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:40.801 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:41.060 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:41.060 [193/268] Linking static target lib/librte_security.a 00:04:41.318 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.318 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:41.884 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.884 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:42.142 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:42.142 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:42.142 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:42.142 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:42.401 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:42.401 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:42.401 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:42.684 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:42.684 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:42.684 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:42.684 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:42.944 [209/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:42.944 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:42.944 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:42.944 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:43.202 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:43.202 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:43.202 [215/268] Linking static target drivers/librte_bus_vdev.a 00:04:43.202 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:43.202 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:43.202 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:43.202 [219/268] Linking static target drivers/librte_bus_pci.a 00:04:43.202 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:43.202 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:43.460 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:43.460 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:43.460 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:43.460 [225/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:43.460 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:43.719 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:44.654 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:47.196 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:47.196 [230/268] Linking target lib/librte_eal.so.24.1 00:04:47.196 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:47.196 [232/268] Linking target lib/librte_pci.so.24.1 00:04:47.196 [233/268] Linking target lib/librte_ring.so.24.1 00:04:47.196 [234/268] Linking target lib/librte_dmadev.so.24.1 00:04:47.196 [235/268] Linking target lib/librte_meter.so.24.1 00:04:47.196 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:47.196 [237/268] Linking target lib/librte_timer.so.24.1 00:04:47.469 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:47.469 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:47.469 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:47.469 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:47.469 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:47.469 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:47.469 [244/268] Linking target lib/librte_rcu.so.24.1 00:04:47.469 [245/268] Linking target lib/librte_mempool.so.24.1 00:04:47.469 [246/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:47.469 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:47.729 [248/268] Generating symbol file 
lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:47.729 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:47.729 [250/268] Linking target lib/librte_mbuf.so.24.1 00:04:47.729 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:47.987 [252/268] Linking target lib/librte_reorder.so.24.1 00:04:47.987 [253/268] Linking target lib/librte_compressdev.so.24.1 00:04:47.987 [254/268] Linking target lib/librte_net.so.24.1 00:04:47.987 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:04:47.987 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:47.987 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:47.987 [258/268] Linking target lib/librte_hash.so.24.1 00:04:48.245 [259/268] Linking target lib/librte_cmdline.so.24.1 00:04:48.245 [260/268] Linking target lib/librte_ethdev.so.24.1 00:04:48.245 [261/268] Linking target lib/librte_security.so.24.1 00:04:48.245 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:48.245 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:48.505 [264/268] Linking target lib/librte_power.so.24.1 00:04:48.763 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:48.763 [266/268] Linking static target lib/librte_vhost.a 00:04:51.295 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:51.595 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:51.595 INFO: autodetecting backend as ninja 00:04:51.595 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:13.521 CC lib/log/log.o 00:05:13.521 CC lib/log/log_flags.o 00:05:13.521 CC lib/log/log_deprecated.o 00:05:13.521 CC lib/ut/ut.o 00:05:13.521 CC lib/ut_mock/mock.o 00:05:13.521 LIB libspdk_ut_mock.a 00:05:13.521 LIB libspdk_log.a 00:05:13.521 LIB libspdk_ut.a 00:05:13.521 SO libspdk_ut_mock.so.6.0 00:05:13.521 SO libspdk_log.so.7.1 00:05:13.521 SO libspdk_ut.so.2.0 00:05:13.521 SYMLINK libspdk_ut_mock.so 00:05:13.521 SYMLINK libspdk_log.so 00:05:13.521 SYMLINK libspdk_ut.so 00:05:13.521 CC lib/util/base64.o 00:05:13.521 CC lib/util/bit_array.o 00:05:13.521 CC lib/util/cpuset.o 00:05:13.521 CC lib/util/crc16.o 00:05:13.521 CC lib/util/crc32c.o 00:05:13.521 CC lib/ioat/ioat.o 00:05:13.521 CC lib/util/crc32.o 00:05:13.521 CXX lib/trace_parser/trace.o 00:05:13.521 CC lib/dma/dma.o 00:05:13.521 CC lib/vfio_user/host/vfio_user_pci.o 00:05:13.521 CC lib/util/crc32_ieee.o 00:05:13.521 CC lib/vfio_user/host/vfio_user.o 00:05:13.521 CC lib/util/crc64.o 00:05:13.521 CC lib/util/dif.o 00:05:13.521 CC lib/util/fd.o 00:05:13.521 LIB libspdk_dma.a 00:05:13.521 CC lib/util/fd_group.o 00:05:13.521 CC lib/util/file.o 00:05:13.521 SO libspdk_dma.so.5.0 00:05:13.521 LIB libspdk_ioat.a 00:05:13.521 CC lib/util/hexlify.o 00:05:13.521 SO libspdk_ioat.so.7.0 00:05:13.521 SYMLINK libspdk_dma.so 00:05:13.521 CC lib/util/math.o 00:05:13.521 CC lib/util/iov.o 00:05:13.521 CC lib/util/net.o 00:05:13.521 LIB libspdk_vfio_user.a 00:05:13.521 SYMLINK libspdk_ioat.so 00:05:13.521 CC lib/util/pipe.o 00:05:13.521 SO libspdk_vfio_user.so.5.0 00:05:13.521 CC lib/util/strerror_tls.o 00:05:13.521 CC lib/util/string.o 00:05:13.521 SYMLINK libspdk_vfio_user.so 00:05:13.521 CC lib/util/uuid.o 00:05:13.521 CC lib/util/xor.o 00:05:13.521 CC lib/util/zipf.o 00:05:13.521 
CC lib/util/md5.o 00:05:13.521 LIB libspdk_util.a 00:05:13.521 LIB libspdk_trace_parser.a 00:05:13.521 SO libspdk_util.so.10.1 00:05:13.521 SO libspdk_trace_parser.so.6.0 00:05:13.521 SYMLINK libspdk_util.so 00:05:13.521 SYMLINK libspdk_trace_parser.so 00:05:13.521 CC lib/env_dpdk/env.o 00:05:13.521 CC lib/env_dpdk/memory.o 00:05:13.521 CC lib/env_dpdk/init.o 00:05:13.521 CC lib/env_dpdk/pci.o 00:05:13.521 CC lib/env_dpdk/threads.o 00:05:13.521 CC lib/idxd/idxd.o 00:05:13.521 CC lib/conf/conf.o 00:05:13.521 CC lib/rdma_utils/rdma_utils.o 00:05:13.521 CC lib/vmd/vmd.o 00:05:13.521 CC lib/json/json_parse.o 00:05:13.521 CC lib/env_dpdk/pci_ioat.o 00:05:13.521 LIB libspdk_conf.a 00:05:13.521 SO libspdk_conf.so.6.0 00:05:13.521 CC lib/json/json_util.o 00:05:13.521 CC lib/vmd/led.o 00:05:13.521 LIB libspdk_rdma_utils.a 00:05:13.521 SYMLINK libspdk_conf.so 00:05:13.521 CC lib/idxd/idxd_user.o 00:05:13.521 SO libspdk_rdma_utils.so.1.0 00:05:13.522 SYMLINK libspdk_rdma_utils.so 00:05:13.522 CC lib/env_dpdk/pci_virtio.o 00:05:13.522 CC lib/json/json_write.o 00:05:13.522 CC lib/idxd/idxd_kernel.o 00:05:13.843 CC lib/env_dpdk/pci_vmd.o 00:05:13.843 CC lib/env_dpdk/pci_idxd.o 00:05:13.843 CC lib/env_dpdk/pci_event.o 00:05:13.843 CC lib/env_dpdk/sigbus_handler.o 00:05:13.843 CC lib/env_dpdk/pci_dpdk.o 00:05:13.843 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:13.843 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:13.843 LIB libspdk_idxd.a 00:05:13.843 LIB libspdk_json.a 00:05:13.843 SO libspdk_idxd.so.12.1 00:05:13.843 SO libspdk_json.so.6.0 00:05:14.102 LIB libspdk_vmd.a 00:05:14.102 SYMLINK libspdk_json.so 00:05:14.102 SYMLINK libspdk_idxd.so 00:05:14.102 SO libspdk_vmd.so.6.0 00:05:14.102 SYMLINK libspdk_vmd.so 00:05:14.102 CC lib/rdma_provider/common.o 00:05:14.102 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:14.362 CC lib/jsonrpc/jsonrpc_server.o 00:05:14.362 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:14.362 CC lib/jsonrpc/jsonrpc_client.o 00:05:14.362 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:14.362 LIB libspdk_rdma_provider.a 00:05:14.362 SO libspdk_rdma_provider.so.7.0 00:05:14.620 SYMLINK libspdk_rdma_provider.so 00:05:14.620 LIB libspdk_jsonrpc.a 00:05:14.879 SO libspdk_jsonrpc.so.6.0 00:05:14.879 SYMLINK libspdk_jsonrpc.so 00:05:14.879 LIB libspdk_env_dpdk.a 00:05:15.138 SO libspdk_env_dpdk.so.15.1 00:05:15.138 CC lib/rpc/rpc.o 00:05:15.138 SYMLINK libspdk_env_dpdk.so 00:05:15.397 LIB libspdk_rpc.a 00:05:15.397 SO libspdk_rpc.so.6.0 00:05:15.656 SYMLINK libspdk_rpc.so 00:05:15.915 CC lib/trace/trace_flags.o 00:05:15.915 CC lib/trace/trace.o 00:05:15.915 CC lib/trace/trace_rpc.o 00:05:15.915 CC lib/keyring/keyring.o 00:05:15.915 CC lib/notify/notify_rpc.o 00:05:15.915 CC lib/notify/notify.o 00:05:15.915 CC lib/keyring/keyring_rpc.o 00:05:16.175 LIB libspdk_notify.a 00:05:16.175 SO libspdk_notify.so.6.0 00:05:16.175 LIB libspdk_keyring.a 00:05:16.175 LIB libspdk_trace.a 00:05:16.175 SYMLINK libspdk_notify.so 00:05:16.434 SO libspdk_keyring.so.2.0 00:05:16.434 SO libspdk_trace.so.11.0 00:05:16.434 SYMLINK libspdk_keyring.so 00:05:16.434 SYMLINK libspdk_trace.so 00:05:16.692 CC lib/thread/thread.o 00:05:16.692 CC lib/thread/iobuf.o 00:05:16.950 CC lib/sock/sock_rpc.o 00:05:16.950 CC lib/sock/sock.o 00:05:17.209 LIB libspdk_sock.a 00:05:17.468 SO libspdk_sock.so.10.0 00:05:17.468 SYMLINK libspdk_sock.so 00:05:17.728 CC lib/nvme/nvme_ctrlr.o 00:05:17.728 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:17.728 CC lib/nvme/nvme_pcie_common.o 00:05:17.728 CC lib/nvme/nvme_qpair.o 00:05:17.728 CC lib/nvme/nvme.o 00:05:17.728 CC 
lib/nvme/nvme_fabric.o 00:05:17.728 CC lib/nvme/nvme_ns_cmd.o 00:05:17.728 CC lib/nvme/nvme_pcie.o 00:05:17.728 CC lib/nvme/nvme_ns.o 00:05:18.664 CC lib/nvme/nvme_quirks.o 00:05:18.664 CC lib/nvme/nvme_transport.o 00:05:18.664 CC lib/nvme/nvme_discovery.o 00:05:18.922 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:18.922 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:18.922 LIB libspdk_thread.a 00:05:18.922 CC lib/nvme/nvme_tcp.o 00:05:18.922 SO libspdk_thread.so.11.0 00:05:18.922 SYMLINK libspdk_thread.so 00:05:18.922 CC lib/nvme/nvme_opal.o 00:05:19.181 CC lib/nvme/nvme_io_msg.o 00:05:19.439 CC lib/nvme/nvme_poll_group.o 00:05:19.439 CC lib/nvme/nvme_zns.o 00:05:19.439 CC lib/nvme/nvme_stubs.o 00:05:19.439 CC lib/nvme/nvme_auth.o 00:05:19.697 CC lib/nvme/nvme_cuse.o 00:05:19.697 CC lib/nvme/nvme_rdma.o 00:05:20.263 CC lib/accel/accel.o 00:05:20.263 CC lib/blob/blobstore.o 00:05:20.263 CC lib/init/json_config.o 00:05:20.263 CC lib/virtio/virtio.o 00:05:20.263 CC lib/fsdev/fsdev.o 00:05:20.520 CC lib/fsdev/fsdev_io.o 00:05:20.520 CC lib/init/subsystem.o 00:05:20.778 CC lib/virtio/virtio_vhost_user.o 00:05:20.778 CC lib/blob/request.o 00:05:20.778 CC lib/init/subsystem_rpc.o 00:05:20.778 CC lib/blob/zeroes.o 00:05:21.037 CC lib/init/rpc.o 00:05:21.037 CC lib/blob/blob_bs_dev.o 00:05:21.037 CC lib/fsdev/fsdev_rpc.o 00:05:21.037 CC lib/virtio/virtio_vfio_user.o 00:05:21.037 CC lib/virtio/virtio_pci.o 00:05:21.037 CC lib/accel/accel_rpc.o 00:05:21.037 LIB libspdk_init.a 00:05:21.297 CC lib/accel/accel_sw.o 00:05:21.297 SO libspdk_init.so.6.0 00:05:21.297 LIB libspdk_fsdev.a 00:05:21.297 SYMLINK libspdk_init.so 00:05:21.297 SO libspdk_fsdev.so.2.0 00:05:21.555 SYMLINK libspdk_fsdev.so 00:05:21.556 LIB libspdk_virtio.a 00:05:21.556 CC lib/event/app.o 00:05:21.556 CC lib/event/reactor.o 00:05:21.556 CC lib/event/app_rpc.o 00:05:21.556 CC lib/event/log_rpc.o 00:05:21.556 SO libspdk_virtio.so.7.0 00:05:21.556 CC lib/event/scheduler_static.o 00:05:21.556 LIB libspdk_accel.a 00:05:21.815 LIB libspdk_nvme.a 00:05:21.815 SO libspdk_accel.so.16.0 00:05:21.815 SYMLINK libspdk_virtio.so 00:05:21.815 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:21.815 SYMLINK libspdk_accel.so 00:05:21.815 SO libspdk_nvme.so.15.0 00:05:22.073 CC lib/bdev/bdev_zone.o 00:05:22.073 CC lib/bdev/bdev_rpc.o 00:05:22.073 CC lib/bdev/part.o 00:05:22.073 CC lib/bdev/bdev.o 00:05:22.073 LIB libspdk_event.a 00:05:22.073 CC lib/bdev/scsi_nvme.o 00:05:22.331 SO libspdk_event.so.14.0 00:05:22.331 SYMLINK libspdk_nvme.so 00:05:22.331 SYMLINK libspdk_event.so 00:05:22.589 LIB libspdk_fuse_dispatcher.a 00:05:22.589 SO libspdk_fuse_dispatcher.so.1.0 00:05:22.589 SYMLINK libspdk_fuse_dispatcher.so 00:05:24.491 LIB libspdk_blob.a 00:05:24.792 SO libspdk_blob.so.12.0 00:05:24.792 SYMLINK libspdk_blob.so 00:05:25.356 CC lib/lvol/lvol.o 00:05:25.356 CC lib/blobfs/tree.o 00:05:25.356 CC lib/blobfs/blobfs.o 00:05:25.356 LIB libspdk_bdev.a 00:05:25.614 SO libspdk_bdev.so.17.0 00:05:25.614 SYMLINK libspdk_bdev.so 00:05:25.873 CC lib/nbd/nbd_rpc.o 00:05:25.873 CC lib/nbd/nbd.o 00:05:25.873 CC lib/nvmf/ctrlr.o 00:05:25.873 CC lib/nvmf/ctrlr_discovery.o 00:05:25.873 CC lib/nvmf/ctrlr_bdev.o 00:05:25.873 CC lib/ftl/ftl_core.o 00:05:25.873 CC lib/scsi/dev.o 00:05:25.873 CC lib/ublk/ublk.o 00:05:26.131 CC lib/scsi/lun.o 00:05:26.131 LIB libspdk_blobfs.a 00:05:26.131 SO libspdk_blobfs.so.11.0 00:05:26.131 CC lib/nvmf/subsystem.o 00:05:26.389 SYMLINK libspdk_blobfs.so 00:05:26.389 LIB libspdk_lvol.a 00:05:26.389 CC lib/scsi/port.o 00:05:26.389 SO libspdk_lvol.so.11.0 
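
Each SPDK library in this stretch of the log goes through the same four-step cycle: CC compiles a source file to an object, LIB archives the objects into a static libspdk_*.a, SO links the versioned shared object, and SYMLINK points the unversioned development name at it. Roughly, one such cycle corresponds to toolchain steps like the following, shown for libspdk_util with the version number printed above; the flags are placeholders, since SPDK's actual make rules do not appear in this log.

  # Illustrative equivalent of one CC/LIB/SO/SYMLINK cycle, not SPDK's
  # actual rules or flags; the real archive and shared object contain
  # every object under lib/util, not just one.
  cc -c -fPIC lib/util/base64.c -o base64.o        # CC      lib/util/base64.o
  ar rcs libspdk_util.a base64.o                   # LIB     libspdk_util.a
  cc -shared -o libspdk_util.so.10.1 base64.o      # SO      libspdk_util.so.10.1
  ln -sf libspdk_util.so.10.1 libspdk_util.so      # SYMLINK libspdk_util.so
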
00:05:26.389 SYMLINK libspdk_lvol.so 00:05:26.389 CC lib/scsi/scsi.o 00:05:26.389 CC lib/ublk/ublk_rpc.o 00:05:26.389 LIB libspdk_nbd.a 00:05:26.389 CC lib/nvmf/nvmf.o 00:05:26.389 SO libspdk_nbd.so.7.0 00:05:26.647 CC lib/nvmf/nvmf_rpc.o 00:05:26.647 CC lib/scsi/scsi_bdev.o 00:05:26.647 SYMLINK libspdk_nbd.so 00:05:26.647 CC lib/ftl/ftl_init.o 00:05:26.647 CC lib/scsi/scsi_pr.o 00:05:26.647 CC lib/scsi/scsi_rpc.o 00:05:26.905 CC lib/nvmf/transport.o 00:05:26.905 CC lib/nvmf/tcp.o 00:05:26.905 CC lib/ftl/ftl_layout.o 00:05:26.905 LIB libspdk_ublk.a 00:05:26.905 SO libspdk_ublk.so.3.0 00:05:27.164 CC lib/scsi/task.o 00:05:27.164 SYMLINK libspdk_ublk.so 00:05:27.164 CC lib/ftl/ftl_debug.o 00:05:27.164 CC lib/nvmf/stubs.o 00:05:27.164 LIB libspdk_scsi.a 00:05:27.164 CC lib/nvmf/mdns_server.o 00:05:27.422 CC lib/ftl/ftl_io.o 00:05:27.422 SO libspdk_scsi.so.9.0 00:05:27.422 SYMLINK libspdk_scsi.so 00:05:27.422 CC lib/nvmf/rdma.o 00:05:27.681 CC lib/nvmf/auth.o 00:05:27.681 CC lib/ftl/ftl_sb.o 00:05:27.681 CC lib/ftl/ftl_l2p.o 00:05:27.681 CC lib/ftl/ftl_l2p_flat.o 00:05:27.939 CC lib/ftl/ftl_nv_cache.o 00:05:27.939 CC lib/iscsi/conn.o 00:05:27.939 CC lib/ftl/ftl_band.o 00:05:27.939 CC lib/vhost/vhost.o 00:05:27.939 CC lib/vhost/vhost_rpc.o 00:05:27.939 CC lib/vhost/vhost_scsi.o 00:05:28.197 CC lib/vhost/vhost_blk.o 00:05:28.466 CC lib/vhost/rte_vhost_user.o 00:05:28.752 CC lib/iscsi/init_grp.o 00:05:28.752 CC lib/iscsi/iscsi.o 00:05:28.752 CC lib/iscsi/param.o 00:05:28.752 CC lib/iscsi/portal_grp.o 00:05:29.017 CC lib/iscsi/tgt_node.o 00:05:29.017 CC lib/iscsi/iscsi_subsystem.o 00:05:29.017 CC lib/ftl/ftl_band_ops.o 00:05:29.276 CC lib/iscsi/iscsi_rpc.o 00:05:29.276 CC lib/iscsi/task.o 00:05:29.276 CC lib/ftl/ftl_writer.o 00:05:29.276 CC lib/ftl/ftl_rq.o 00:05:29.535 CC lib/ftl/ftl_reloc.o 00:05:29.535 CC lib/ftl/ftl_l2p_cache.o 00:05:29.535 CC lib/ftl/ftl_p2l.o 00:05:29.535 CC lib/ftl/ftl_p2l_log.o 00:05:29.535 CC lib/ftl/mngt/ftl_mngt.o 00:05:29.535 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:29.794 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:29.794 LIB libspdk_vhost.a 00:05:29.794 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:29.794 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:29.794 SO libspdk_vhost.so.8.0 00:05:29.794 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:30.053 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:30.053 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:30.053 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:30.053 SYMLINK libspdk_vhost.so 00:05:30.053 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:30.053 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:30.053 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:30.311 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:30.311 CC lib/ftl/utils/ftl_conf.o 00:05:30.311 CC lib/ftl/utils/ftl_md.o 00:05:30.311 CC lib/ftl/utils/ftl_mempool.o 00:05:30.311 CC lib/ftl/utils/ftl_bitmap.o 00:05:30.311 CC lib/ftl/utils/ftl_property.o 00:05:30.311 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:30.311 LIB libspdk_nvmf.a 00:05:30.311 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:30.569 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:30.569 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:30.569 SO libspdk_nvmf.so.20.0 00:05:30.569 LIB libspdk_iscsi.a 00:05:30.569 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:30.569 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:30.828 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:30.828 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:30.828 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:30.828 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:30.828 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:30.828 SO libspdk_iscsi.so.8.0 00:05:30.828 CC 
lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:30.828 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:30.828 SYMLINK libspdk_nvmf.so 00:05:30.828 CC lib/ftl/base/ftl_base_dev.o 00:05:30.828 CC lib/ftl/base/ftl_base_bdev.o 00:05:30.828 CC lib/ftl/ftl_trace.o 00:05:31.086 SYMLINK libspdk_iscsi.so 00:05:31.344 LIB libspdk_ftl.a 00:05:31.601 SO libspdk_ftl.so.9.0 00:05:31.859 SYMLINK libspdk_ftl.so 00:05:32.425 CC module/env_dpdk/env_dpdk_rpc.o 00:05:32.425 CC module/accel/error/accel_error.o 00:05:32.425 CC module/keyring/file/keyring.o 00:05:32.425 CC module/blob/bdev/blob_bdev.o 00:05:32.425 CC module/accel/ioat/accel_ioat.o 00:05:32.425 CC module/keyring/linux/keyring.o 00:05:32.425 CC module/fsdev/aio/fsdev_aio.o 00:05:32.425 CC module/sock/posix/posix.o 00:05:32.425 CC module/accel/dsa/accel_dsa.o 00:05:32.425 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:32.682 LIB libspdk_env_dpdk_rpc.a 00:05:32.682 SO libspdk_env_dpdk_rpc.so.6.0 00:05:32.682 CC module/keyring/linux/keyring_rpc.o 00:05:32.682 CC module/keyring/file/keyring_rpc.o 00:05:32.682 CC module/accel/ioat/accel_ioat_rpc.o 00:05:32.682 SYMLINK libspdk_env_dpdk_rpc.so 00:05:32.682 CC module/accel/error/accel_error_rpc.o 00:05:32.682 LIB libspdk_scheduler_dynamic.a 00:05:32.682 SO libspdk_scheduler_dynamic.so.4.0 00:05:32.682 LIB libspdk_blob_bdev.a 00:05:32.950 LIB libspdk_keyring_file.a 00:05:32.950 SO libspdk_blob_bdev.so.12.0 00:05:32.950 SYMLINK libspdk_scheduler_dynamic.so 00:05:32.950 CC module/accel/dsa/accel_dsa_rpc.o 00:05:32.950 LIB libspdk_keyring_linux.a 00:05:32.950 LIB libspdk_accel_ioat.a 00:05:32.950 SO libspdk_keyring_file.so.2.0 00:05:32.950 LIB libspdk_accel_error.a 00:05:32.950 SO libspdk_keyring_linux.so.1.0 00:05:32.950 SYMLINK libspdk_blob_bdev.so 00:05:32.950 SO libspdk_accel_ioat.so.6.0 00:05:32.950 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:32.950 SO libspdk_accel_error.so.2.0 00:05:32.950 SYMLINK libspdk_keyring_file.so 00:05:32.950 SYMLINK libspdk_accel_ioat.so 00:05:32.950 SYMLINK libspdk_keyring_linux.so 00:05:32.950 CC module/fsdev/aio/linux_aio_mgr.o 00:05:32.950 LIB libspdk_accel_dsa.a 00:05:33.241 SYMLINK libspdk_accel_error.so 00:05:33.241 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:33.241 CC module/scheduler/gscheduler/gscheduler.o 00:05:33.241 SO libspdk_accel_dsa.so.5.0 00:05:33.241 SYMLINK libspdk_accel_dsa.so 00:05:33.241 LIB libspdk_scheduler_dpdk_governor.a 00:05:33.241 CC module/accel/iaa/accel_iaa.o 00:05:33.241 LIB libspdk_scheduler_gscheduler.a 00:05:33.241 SO libspdk_scheduler_gscheduler.so.4.0 00:05:33.241 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:33.241 CC module/bdev/delay/vbdev_delay.o 00:05:33.241 CC module/blobfs/bdev/blobfs_bdev.o 00:05:33.241 CC module/bdev/error/vbdev_error.o 00:05:33.241 LIB libspdk_fsdev_aio.a 00:05:33.241 SYMLINK libspdk_scheduler_gscheduler.so 00:05:33.241 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:33.241 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:33.241 CC module/accel/iaa/accel_iaa_rpc.o 00:05:33.498 CC module/bdev/gpt/gpt.o 00:05:33.498 SO libspdk_fsdev_aio.so.1.0 00:05:33.498 LIB libspdk_sock_posix.a 00:05:33.498 SO libspdk_sock_posix.so.6.0 00:05:33.498 CC module/bdev/lvol/vbdev_lvol.o 00:05:33.498 SYMLINK libspdk_fsdev_aio.so 00:05:33.498 CC module/bdev/error/vbdev_error_rpc.o 00:05:33.498 CC module/bdev/gpt/vbdev_gpt.o 00:05:33.498 LIB libspdk_accel_iaa.a 00:05:33.498 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:33.498 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:33.498 SYMLINK libspdk_sock_posix.so 00:05:33.498 SO 
libspdk_accel_iaa.so.3.0 00:05:33.498 SYMLINK libspdk_accel_iaa.so 00:05:33.756 LIB libspdk_bdev_error.a 00:05:33.756 LIB libspdk_bdev_delay.a 00:05:33.756 LIB libspdk_blobfs_bdev.a 00:05:33.756 SO libspdk_bdev_error.so.6.0 00:05:33.756 CC module/bdev/malloc/bdev_malloc.o 00:05:33.756 SO libspdk_blobfs_bdev.so.6.0 00:05:33.756 SO libspdk_bdev_delay.so.6.0 00:05:33.756 CC module/bdev/null/bdev_null.o 00:05:33.756 CC module/bdev/passthru/vbdev_passthru.o 00:05:33.756 CC module/bdev/nvme/bdev_nvme.o 00:05:33.756 SYMLINK libspdk_bdev_error.so 00:05:33.756 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:33.756 SYMLINK libspdk_blobfs_bdev.so 00:05:33.756 CC module/bdev/null/bdev_null_rpc.o 00:05:33.756 LIB libspdk_bdev_gpt.a 00:05:33.756 SYMLINK libspdk_bdev_delay.so 00:05:34.014 SO libspdk_bdev_gpt.so.6.0 00:05:34.014 SYMLINK libspdk_bdev_gpt.so 00:05:34.014 LIB libspdk_bdev_lvol.a 00:05:34.014 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:34.014 SO libspdk_bdev_lvol.so.6.0 00:05:34.014 CC module/bdev/raid/bdev_raid.o 00:05:34.014 LIB libspdk_bdev_null.a 00:05:34.014 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:34.273 SO libspdk_bdev_null.so.6.0 00:05:34.273 SYMLINK libspdk_bdev_lvol.so 00:05:34.273 CC module/bdev/split/vbdev_split.o 00:05:34.273 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:34.273 SYMLINK libspdk_bdev_null.so 00:05:34.273 LIB libspdk_bdev_malloc.a 00:05:34.273 SO libspdk_bdev_malloc.so.6.0 00:05:34.273 LIB libspdk_bdev_passthru.a 00:05:34.273 CC module/bdev/xnvme/bdev_xnvme.o 00:05:34.273 SO libspdk_bdev_passthru.so.6.0 00:05:34.273 CC module/bdev/aio/bdev_aio.o 00:05:34.273 SYMLINK libspdk_bdev_malloc.so 00:05:34.273 CC module/bdev/aio/bdev_aio_rpc.o 00:05:34.531 CC module/bdev/ftl/bdev_ftl.o 00:05:34.531 CC module/bdev/split/vbdev_split_rpc.o 00:05:34.531 SYMLINK libspdk_bdev_passthru.so 00:05:34.531 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:34.531 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:34.531 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:05:34.531 LIB libspdk_bdev_split.a 00:05:34.791 SO libspdk_bdev_split.so.6.0 00:05:34.791 CC module/bdev/raid/bdev_raid_rpc.o 00:05:34.791 CC module/bdev/raid/bdev_raid_sb.o 00:05:34.791 LIB libspdk_bdev_zone_block.a 00:05:34.791 LIB libspdk_bdev_ftl.a 00:05:34.791 SYMLINK libspdk_bdev_split.so 00:05:34.791 LIB libspdk_bdev_aio.a 00:05:34.791 LIB libspdk_bdev_xnvme.a 00:05:34.791 SO libspdk_bdev_zone_block.so.6.0 00:05:34.791 SO libspdk_bdev_ftl.so.6.0 00:05:34.791 SO libspdk_bdev_aio.so.6.0 00:05:34.791 SO libspdk_bdev_xnvme.so.3.0 00:05:34.791 CC module/bdev/iscsi/bdev_iscsi.o 00:05:34.791 SYMLINK libspdk_bdev_aio.so 00:05:34.791 SYMLINK libspdk_bdev_zone_block.so 00:05:34.791 SYMLINK libspdk_bdev_ftl.so 00:05:34.791 CC module/bdev/nvme/nvme_rpc.o 00:05:34.791 CC module/bdev/nvme/bdev_mdns_client.o 00:05:34.791 CC module/bdev/raid/raid0.o 00:05:34.791 SYMLINK libspdk_bdev_xnvme.so 00:05:34.791 CC module/bdev/raid/raid1.o 00:05:35.050 CC module/bdev/raid/concat.o 00:05:35.050 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:35.050 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:35.050 CC module/bdev/nvme/vbdev_opal.o 00:05:35.050 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:35.309 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:35.309 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:35.309 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:35.573 LIB libspdk_bdev_raid.a 00:05:35.573 LIB libspdk_bdev_iscsi.a 00:05:35.573 LIB libspdk_bdev_virtio.a 00:05:35.573 SO libspdk_bdev_iscsi.so.6.0 00:05:35.573 SO libspdk_bdev_raid.so.6.0 00:05:35.573 SO 
libspdk_bdev_virtio.so.6.0 00:05:35.573 SYMLINK libspdk_bdev_iscsi.so 00:05:35.573 SYMLINK libspdk_bdev_raid.so 00:05:35.831 SYMLINK libspdk_bdev_virtio.so 00:05:37.210 LIB libspdk_bdev_nvme.a 00:05:37.210 SO libspdk_bdev_nvme.so.7.1 00:05:37.210 SYMLINK libspdk_bdev_nvme.so 00:05:37.778 CC module/event/subsystems/fsdev/fsdev.o 00:05:37.778 CC module/event/subsystems/scheduler/scheduler.o 00:05:37.778 CC module/event/subsystems/vmd/vmd.o 00:05:37.778 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:37.778 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:37.778 CC module/event/subsystems/iobuf/iobuf.o 00:05:37.778 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:37.778 CC module/event/subsystems/keyring/keyring.o 00:05:37.778 CC module/event/subsystems/sock/sock.o 00:05:38.081 LIB libspdk_event_keyring.a 00:05:38.081 LIB libspdk_event_vmd.a 00:05:38.081 LIB libspdk_event_scheduler.a 00:05:38.081 LIB libspdk_event_fsdev.a 00:05:38.081 LIB libspdk_event_vhost_blk.a 00:05:38.081 LIB libspdk_event_iobuf.a 00:05:38.081 SO libspdk_event_keyring.so.1.0 00:05:38.081 SO libspdk_event_scheduler.so.4.0 00:05:38.081 SO libspdk_event_vmd.so.6.0 00:05:38.081 SO libspdk_event_fsdev.so.1.0 00:05:38.081 SO libspdk_event_vhost_blk.so.3.0 00:05:38.081 SO libspdk_event_iobuf.so.3.0 00:05:38.081 LIB libspdk_event_sock.a 00:05:38.081 SYMLINK libspdk_event_keyring.so 00:05:38.081 SO libspdk_event_sock.so.5.0 00:05:38.081 SYMLINK libspdk_event_fsdev.so 00:05:38.081 SYMLINK libspdk_event_scheduler.so 00:05:38.081 SYMLINK libspdk_event_vhost_blk.so 00:05:38.081 SYMLINK libspdk_event_vmd.so 00:05:38.081 SYMLINK libspdk_event_iobuf.so 00:05:38.081 SYMLINK libspdk_event_sock.so 00:05:38.649 CC module/event/subsystems/accel/accel.o 00:05:38.649 LIB libspdk_event_accel.a 00:05:38.649 SO libspdk_event_accel.so.6.0 00:05:38.909 SYMLINK libspdk_event_accel.so 00:05:39.168 CC module/event/subsystems/bdev/bdev.o 00:05:39.427 LIB libspdk_event_bdev.a 00:05:39.427 SO libspdk_event_bdev.so.6.0 00:05:39.686 SYMLINK libspdk_event_bdev.so 00:05:39.945 CC module/event/subsystems/scsi/scsi.o 00:05:39.945 CC module/event/subsystems/nbd/nbd.o 00:05:39.945 CC module/event/subsystems/ublk/ublk.o 00:05:39.945 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:39.945 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:39.945 LIB libspdk_event_ublk.a 00:05:39.945 LIB libspdk_event_scsi.a 00:05:40.204 LIB libspdk_event_nbd.a 00:05:40.204 SO libspdk_event_ublk.so.3.0 00:05:40.204 SO libspdk_event_scsi.so.6.0 00:05:40.204 SO libspdk_event_nbd.so.6.0 00:05:40.204 SYMLINK libspdk_event_ublk.so 00:05:40.204 LIB libspdk_event_nvmf.a 00:05:40.204 SYMLINK libspdk_event_scsi.so 00:05:40.204 SYMLINK libspdk_event_nbd.so 00:05:40.204 SO libspdk_event_nvmf.so.6.0 00:05:40.204 SYMLINK libspdk_event_nvmf.so 00:05:40.463 CC module/event/subsystems/iscsi/iscsi.o 00:05:40.463 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:40.722 LIB libspdk_event_vhost_scsi.a 00:05:40.722 SO libspdk_event_vhost_scsi.so.3.0 00:05:40.722 LIB libspdk_event_iscsi.a 00:05:40.722 SO libspdk_event_iscsi.so.6.0 00:05:40.722 SYMLINK libspdk_event_vhost_scsi.so 00:05:40.981 SYMLINK libspdk_event_iscsi.so 00:05:41.240 SO libspdk.so.6.0 00:05:41.240 SYMLINK libspdk.so 00:05:41.500 TEST_HEADER include/spdk/accel.h 00:05:41.500 TEST_HEADER include/spdk/accel_module.h 00:05:41.500 TEST_HEADER include/spdk/assert.h 00:05:41.500 TEST_HEADER include/spdk/barrier.h 00:05:41.500 TEST_HEADER include/spdk/base64.h 00:05:41.500 TEST_HEADER include/spdk/bdev.h 00:05:41.500 CC 
test/rpc_client/rpc_client_test.o 00:05:41.500 TEST_HEADER include/spdk/bdev_module.h 00:05:41.500 TEST_HEADER include/spdk/bdev_zone.h 00:05:41.500 CXX app/trace/trace.o 00:05:41.500 TEST_HEADER include/spdk/bit_array.h 00:05:41.500 TEST_HEADER include/spdk/bit_pool.h 00:05:41.500 TEST_HEADER include/spdk/blob_bdev.h 00:05:41.500 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:41.500 TEST_HEADER include/spdk/blobfs.h 00:05:41.500 TEST_HEADER include/spdk/blob.h 00:05:41.500 TEST_HEADER include/spdk/conf.h 00:05:41.500 TEST_HEADER include/spdk/config.h 00:05:41.500 TEST_HEADER include/spdk/cpuset.h 00:05:41.500 TEST_HEADER include/spdk/crc16.h 00:05:41.500 TEST_HEADER include/spdk/crc32.h 00:05:41.500 TEST_HEADER include/spdk/crc64.h 00:05:41.500 TEST_HEADER include/spdk/dif.h 00:05:41.500 TEST_HEADER include/spdk/dma.h 00:05:41.500 TEST_HEADER include/spdk/endian.h 00:05:41.500 TEST_HEADER include/spdk/env_dpdk.h 00:05:41.500 TEST_HEADER include/spdk/env.h 00:05:41.500 TEST_HEADER include/spdk/event.h 00:05:41.500 TEST_HEADER include/spdk/fd_group.h 00:05:41.500 TEST_HEADER include/spdk/fd.h 00:05:41.500 TEST_HEADER include/spdk/file.h 00:05:41.500 TEST_HEADER include/spdk/fsdev.h 00:05:41.500 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:41.500 TEST_HEADER include/spdk/fsdev_module.h 00:05:41.500 TEST_HEADER include/spdk/ftl.h 00:05:41.500 TEST_HEADER include/spdk/gpt_spec.h 00:05:41.500 TEST_HEADER include/spdk/hexlify.h 00:05:41.500 TEST_HEADER include/spdk/histogram_data.h 00:05:41.500 TEST_HEADER include/spdk/idxd.h 00:05:41.500 TEST_HEADER include/spdk/idxd_spec.h 00:05:41.500 TEST_HEADER include/spdk/init.h 00:05:41.500 CC examples/ioat/perf/perf.o 00:05:41.500 TEST_HEADER include/spdk/ioat.h 00:05:41.500 TEST_HEADER include/spdk/ioat_spec.h 00:05:41.500 CC examples/util/zipf/zipf.o 00:05:41.500 TEST_HEADER include/spdk/iscsi_spec.h 00:05:41.500 TEST_HEADER include/spdk/json.h 00:05:41.500 TEST_HEADER include/spdk/jsonrpc.h 00:05:41.500 TEST_HEADER include/spdk/keyring.h 00:05:41.500 TEST_HEADER include/spdk/keyring_module.h 00:05:41.500 TEST_HEADER include/spdk/likely.h 00:05:41.500 TEST_HEADER include/spdk/log.h 00:05:41.500 TEST_HEADER include/spdk/lvol.h 00:05:41.500 TEST_HEADER include/spdk/md5.h 00:05:41.500 CC test/thread/poller_perf/poller_perf.o 00:05:41.500 TEST_HEADER include/spdk/memory.h 00:05:41.500 TEST_HEADER include/spdk/mmio.h 00:05:41.500 TEST_HEADER include/spdk/nbd.h 00:05:41.500 TEST_HEADER include/spdk/net.h 00:05:41.500 TEST_HEADER include/spdk/notify.h 00:05:41.500 TEST_HEADER include/spdk/nvme.h 00:05:41.500 TEST_HEADER include/spdk/nvme_intel.h 00:05:41.500 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:41.500 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:41.500 TEST_HEADER include/spdk/nvme_spec.h 00:05:41.500 TEST_HEADER include/spdk/nvme_zns.h 00:05:41.500 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:41.500 CC test/app/bdev_svc/bdev_svc.o 00:05:41.500 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:41.500 CC test/dma/test_dma/test_dma.o 00:05:41.500 TEST_HEADER include/spdk/nvmf.h 00:05:41.500 TEST_HEADER include/spdk/nvmf_spec.h 00:05:41.500 TEST_HEADER include/spdk/nvmf_transport.h 00:05:41.500 TEST_HEADER include/spdk/opal.h 00:05:41.759 TEST_HEADER include/spdk/opal_spec.h 00:05:41.759 TEST_HEADER include/spdk/pci_ids.h 00:05:41.759 TEST_HEADER include/spdk/pipe.h 00:05:41.759 TEST_HEADER include/spdk/queue.h 00:05:41.759 TEST_HEADER include/spdk/reduce.h 00:05:41.759 TEST_HEADER include/spdk/rpc.h 00:05:41.759 TEST_HEADER include/spdk/scheduler.h 
00:05:41.759 TEST_HEADER include/spdk/scsi.h 00:05:41.759 TEST_HEADER include/spdk/scsi_spec.h 00:05:41.759 TEST_HEADER include/spdk/sock.h 00:05:41.759 CC test/env/mem_callbacks/mem_callbacks.o 00:05:41.759 TEST_HEADER include/spdk/stdinc.h 00:05:41.759 TEST_HEADER include/spdk/string.h 00:05:41.759 LINK rpc_client_test 00:05:41.759 TEST_HEADER include/spdk/thread.h 00:05:41.759 TEST_HEADER include/spdk/trace.h 00:05:41.759 TEST_HEADER include/spdk/trace_parser.h 00:05:41.759 TEST_HEADER include/spdk/tree.h 00:05:41.759 TEST_HEADER include/spdk/ublk.h 00:05:41.759 TEST_HEADER include/spdk/util.h 00:05:41.759 TEST_HEADER include/spdk/uuid.h 00:05:41.759 TEST_HEADER include/spdk/version.h 00:05:41.759 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:41.759 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:41.759 TEST_HEADER include/spdk/vhost.h 00:05:41.759 TEST_HEADER include/spdk/vmd.h 00:05:41.759 TEST_HEADER include/spdk/xor.h 00:05:41.760 TEST_HEADER include/spdk/zipf.h 00:05:41.760 CXX test/cpp_headers/accel.o 00:05:41.760 LINK zipf 00:05:41.760 LINK poller_perf 00:05:41.760 LINK interrupt_tgt 00:05:41.760 LINK ioat_perf 00:05:41.760 LINK bdev_svc 00:05:41.760 CXX test/cpp_headers/accel_module.o 00:05:42.019 LINK spdk_trace 00:05:42.019 CC test/app/histogram_perf/histogram_perf.o 00:05:42.019 CC examples/ioat/verify/verify.o 00:05:42.019 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:42.019 CC examples/sock/hello_world/hello_sock.o 00:05:42.019 CXX test/cpp_headers/assert.o 00:05:42.019 CXX test/cpp_headers/barrier.o 00:05:42.019 CC examples/thread/thread/thread_ex.o 00:05:42.277 LINK test_dma 00:05:42.277 CC app/trace_record/trace_record.o 00:05:42.277 LINK histogram_perf 00:05:42.277 LINK mem_callbacks 00:05:42.277 CXX test/cpp_headers/base64.o 00:05:42.277 LINK verify 00:05:42.277 CXX test/cpp_headers/bdev.o 00:05:42.277 LINK hello_sock 00:05:42.277 LINK thread 00:05:42.277 CC app/nvmf_tgt/nvmf_main.o 00:05:42.536 CC test/env/vtophys/vtophys.o 00:05:42.536 LINK spdk_trace_record 00:05:42.536 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:42.536 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:42.536 CXX test/cpp_headers/bdev_module.o 00:05:42.536 LINK nvme_fuzz 00:05:42.536 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:42.536 LINK nvmf_tgt 00:05:42.536 CC examples/vmd/lsvmd/lsvmd.o 00:05:42.536 LINK vtophys 00:05:42.536 CC app/iscsi_tgt/iscsi_tgt.o 00:05:42.864 CXX test/cpp_headers/bdev_zone.o 00:05:42.864 CC examples/idxd/perf/perf.o 00:05:42.864 LINK lsvmd 00:05:42.864 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:42.864 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:42.864 CC examples/accel/perf/accel_perf.o 00:05:42.864 CC test/env/memory/memory_ut.o 00:05:42.864 LINK iscsi_tgt 00:05:42.864 CXX test/cpp_headers/bit_array.o 00:05:43.123 LINK vhost_fuzz 00:05:43.123 LINK env_dpdk_post_init 00:05:43.123 CC examples/vmd/led/led.o 00:05:43.123 CXX test/cpp_headers/bit_pool.o 00:05:43.123 LINK hello_fsdev 00:05:43.123 LINK idxd_perf 00:05:43.123 LINK led 00:05:43.382 CXX test/cpp_headers/blob_bdev.o 00:05:43.382 CC app/spdk_tgt/spdk_tgt.o 00:05:43.382 CC app/spdk_lspci/spdk_lspci.o 00:05:43.382 CC test/app/jsoncat/jsoncat.o 00:05:43.382 CXX test/cpp_headers/blobfs_bdev.o 00:05:43.382 LINK spdk_lspci 00:05:43.382 CXX test/cpp_headers/blobfs.o 00:05:43.382 CC test/app/stub/stub.o 00:05:43.382 LINK accel_perf 00:05:43.382 LINK jsoncat 00:05:43.382 LINK spdk_tgt 00:05:43.641 CXX test/cpp_headers/blob.o 00:05:43.641 CXX test/cpp_headers/conf.o 00:05:43.641 CXX 
test/cpp_headers/config.o 00:05:43.641 CC test/event/event_perf/event_perf.o 00:05:43.641 LINK stub 00:05:43.641 CC test/event/reactor/reactor.o 00:05:43.641 CC test/event/reactor_perf/reactor_perf.o 00:05:43.900 LINK event_perf 00:05:43.900 CXX test/cpp_headers/cpuset.o 00:05:43.900 CC app/spdk_nvme_perf/perf.o 00:05:43.900 CXX test/cpp_headers/crc16.o 00:05:43.900 CC examples/blob/hello_world/hello_blob.o 00:05:43.900 LINK reactor 00:05:43.900 CC examples/blob/cli/blobcli.o 00:05:43.900 LINK reactor_perf 00:05:43.900 CXX test/cpp_headers/crc32.o 00:05:44.159 LINK hello_blob 00:05:44.159 LINK memory_ut 00:05:44.159 CC test/nvme/aer/aer.o 00:05:44.159 CXX test/cpp_headers/crc64.o 00:05:44.159 CC test/event/app_repeat/app_repeat.o 00:05:44.159 CC test/blobfs/mkfs/mkfs.o 00:05:44.418 CXX test/cpp_headers/dif.o 00:05:44.418 CC test/accel/dif/dif.o 00:05:44.418 LINK app_repeat 00:05:44.418 CC test/env/pci/pci_ut.o 00:05:44.418 LINK blobcli 00:05:44.418 LINK aer 00:05:44.418 CXX test/cpp_headers/dma.o 00:05:44.418 LINK mkfs 00:05:44.418 LINK iscsi_fuzz 00:05:44.678 CC test/lvol/esnap/esnap.o 00:05:44.678 CXX test/cpp_headers/endian.o 00:05:44.678 CC test/event/scheduler/scheduler.o 00:05:44.678 CXX test/cpp_headers/env_dpdk.o 00:05:44.678 CXX test/cpp_headers/env.o 00:05:44.678 CC test/nvme/reset/reset.o 00:05:44.937 LINK spdk_nvme_perf 00:05:44.937 LINK pci_ut 00:05:44.937 CC examples/nvme/hello_world/hello_world.o 00:05:44.937 CXX test/cpp_headers/event.o 00:05:44.937 LINK scheduler 00:05:44.937 CC app/spdk_nvme_identify/identify.o 00:05:44.937 CC app/spdk_nvme_discover/discovery_aer.o 00:05:44.937 LINK reset 00:05:45.197 CXX test/cpp_headers/fd_group.o 00:05:45.197 CC app/spdk_top/spdk_top.o 00:05:45.197 LINK dif 00:05:45.197 LINK hello_world 00:05:45.197 CC test/nvme/sgl/sgl.o 00:05:45.197 LINK spdk_nvme_discover 00:05:45.197 CXX test/cpp_headers/fd.o 00:05:45.197 CC test/nvme/e2edp/nvme_dp.o 00:05:45.456 CC app/vhost/vhost.o 00:05:45.457 CXX test/cpp_headers/file.o 00:05:45.457 CC examples/nvme/reconnect/reconnect.o 00:05:45.457 CC app/spdk_dd/spdk_dd.o 00:05:45.457 CC test/nvme/overhead/overhead.o 00:05:45.457 LINK sgl 00:05:45.457 LINK nvme_dp 00:05:45.716 LINK vhost 00:05:45.716 CXX test/cpp_headers/fsdev.o 00:05:45.716 CXX test/cpp_headers/fsdev_module.o 00:05:45.716 CXX test/cpp_headers/ftl.o 00:05:45.975 LINK overhead 00:05:45.975 LINK reconnect 00:05:45.975 LINK spdk_dd 00:05:45.975 CC app/fio/nvme/fio_plugin.o 00:05:45.975 CXX test/cpp_headers/gpt_spec.o 00:05:45.975 CC test/nvme/err_injection/err_injection.o 00:05:45.975 LINK spdk_nvme_identify 00:05:45.975 CC test/nvme/startup/startup.o 00:05:45.975 CXX test/cpp_headers/hexlify.o 00:05:46.234 LINK spdk_top 00:05:46.234 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:46.234 LINK err_injection 00:05:46.234 CXX test/cpp_headers/histogram_data.o 00:05:46.234 LINK startup 00:05:46.234 CC app/fio/bdev/fio_plugin.o 00:05:46.234 CC examples/bdev/hello_world/hello_bdev.o 00:05:46.493 CC examples/bdev/bdevperf/bdevperf.o 00:05:46.493 CXX test/cpp_headers/idxd.o 00:05:46.493 CC examples/nvme/arbitration/arbitration.o 00:05:46.493 CC test/nvme/reserve/reserve.o 00:05:46.493 CC test/bdev/bdevio/bdevio.o 00:05:46.493 CXX test/cpp_headers/idxd_spec.o 00:05:46.493 LINK spdk_nvme 00:05:46.493 LINK hello_bdev 00:05:46.753 CXX test/cpp_headers/init.o 00:05:46.753 LINK reserve 00:05:46.753 LINK nvme_manage 00:05:46.753 CC examples/nvme/hotplug/hotplug.o 00:05:46.753 LINK spdk_bdev 00:05:46.753 CXX test/cpp_headers/ioat.o 00:05:46.753 CXX 
test/cpp_headers/ioat_spec.o 00:05:46.753 LINK arbitration 00:05:47.012 LINK bdevio 00:05:47.012 CXX test/cpp_headers/iscsi_spec.o 00:05:47.012 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:47.012 CC test/nvme/simple_copy/simple_copy.o 00:05:47.012 LINK hotplug 00:05:47.291 CC test/nvme/connect_stress/connect_stress.o 00:05:47.291 CC examples/nvme/abort/abort.o 00:05:47.291 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:47.291 LINK cmb_copy 00:05:47.291 CXX test/cpp_headers/json.o 00:05:47.291 CXX test/cpp_headers/jsonrpc.o 00:05:47.291 CC test/nvme/boot_partition/boot_partition.o 00:05:47.291 LINK bdevperf 00:05:47.291 LINK simple_copy 00:05:47.291 LINK connect_stress 00:05:47.571 LINK pmr_persistence 00:05:47.571 CXX test/cpp_headers/keyring.o 00:05:47.571 CXX test/cpp_headers/keyring_module.o 00:05:47.571 LINK boot_partition 00:05:47.571 CC test/nvme/compliance/nvme_compliance.o 00:05:47.571 LINK abort 00:05:47.571 CXX test/cpp_headers/likely.o 00:05:47.830 CC test/nvme/fused_ordering/fused_ordering.o 00:05:47.830 CXX test/cpp_headers/log.o 00:05:47.830 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:47.830 CXX test/cpp_headers/lvol.o 00:05:47.830 CC test/nvme/cuse/cuse.o 00:05:47.830 CC test/nvme/fdp/fdp.o 00:05:47.830 CXX test/cpp_headers/md5.o 00:05:47.830 CXX test/cpp_headers/memory.o 00:05:47.830 CXX test/cpp_headers/mmio.o 00:05:48.094 LINK fused_ordering 00:05:48.094 LINK doorbell_aers 00:05:48.094 CXX test/cpp_headers/nbd.o 00:05:48.094 LINK nvme_compliance 00:05:48.094 CC examples/nvmf/nvmf/nvmf.o 00:05:48.094 CXX test/cpp_headers/net.o 00:05:48.094 CXX test/cpp_headers/notify.o 00:05:48.094 CXX test/cpp_headers/nvme.o 00:05:48.094 CXX test/cpp_headers/nvme_intel.o 00:05:48.094 CXX test/cpp_headers/nvme_ocssd.o 00:05:48.094 LINK fdp 00:05:48.359 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:48.359 CXX test/cpp_headers/nvme_spec.o 00:05:48.359 CXX test/cpp_headers/nvme_zns.o 00:05:48.359 CXX test/cpp_headers/nvmf_cmd.o 00:05:48.359 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:48.359 CXX test/cpp_headers/nvmf.o 00:05:48.359 CXX test/cpp_headers/nvmf_spec.o 00:05:48.618 CXX test/cpp_headers/nvmf_transport.o 00:05:48.618 CXX test/cpp_headers/opal.o 00:05:48.618 LINK nvmf 00:05:48.618 CXX test/cpp_headers/opal_spec.o 00:05:48.618 CXX test/cpp_headers/pci_ids.o 00:05:48.618 CXX test/cpp_headers/pipe.o 00:05:48.618 CXX test/cpp_headers/queue.o 00:05:48.618 CXX test/cpp_headers/reduce.o 00:05:48.618 CXX test/cpp_headers/rpc.o 00:05:48.877 CXX test/cpp_headers/scheduler.o 00:05:48.877 CXX test/cpp_headers/scsi.o 00:05:48.877 CXX test/cpp_headers/scsi_spec.o 00:05:48.877 CXX test/cpp_headers/sock.o 00:05:48.877 CXX test/cpp_headers/stdinc.o 00:05:48.877 CXX test/cpp_headers/string.o 00:05:48.877 CXX test/cpp_headers/thread.o 00:05:48.877 CXX test/cpp_headers/trace.o 00:05:48.877 CXX test/cpp_headers/trace_parser.o 00:05:48.877 CXX test/cpp_headers/tree.o 00:05:48.877 CXX test/cpp_headers/ublk.o 00:05:49.136 CXX test/cpp_headers/uuid.o 00:05:49.136 CXX test/cpp_headers/util.o 00:05:49.136 CXX test/cpp_headers/version.o 00:05:49.136 CXX test/cpp_headers/vfio_user_pci.o 00:05:49.136 CXX test/cpp_headers/vfio_user_spec.o 00:05:49.136 CXX test/cpp_headers/vhost.o 00:05:49.136 CXX test/cpp_headers/vmd.o 00:05:49.136 CXX test/cpp_headers/xor.o 00:05:49.136 CXX test/cpp_headers/zipf.o 00:05:49.136 LINK cuse 00:05:51.042 LINK esnap 00:05:51.610 00:05:51.610 real 1m34.549s 00:05:51.610 user 8m14.014s 00:05:51.610 sys 1m58.141s 00:05:51.610 11:13:18 make -- common/autotest_common.sh@1130 -- $ 
xtrace_disable 00:05:51.610 ************************************ 00:05:51.610 END TEST make 00:05:51.610 ************************************ 00:05:51.610 11:13:18 make -- common/autotest_common.sh@10 -- $ set +x 00:05:51.610 11:13:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:51.610 11:13:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:51.610 11:13:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:51.610 11:13:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:51.610 11:13:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:51.610 11:13:18 -- pm/common@44 -- $ pid=5293 00:05:51.610 11:13:18 -- pm/common@50 -- $ kill -TERM 5293 00:05:51.610 11:13:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:51.610 11:13:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:51.610 11:13:18 -- pm/common@44 -- $ pid=5295 00:05:51.610 11:13:18 -- pm/common@50 -- $ kill -TERM 5295 00:05:51.610 11:13:18 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:51.610 11:13:18 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:51.610 11:13:18 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:51.610 11:13:18 -- common/autotest_common.sh@1711 -- # lcov --version 00:05:51.610 11:13:18 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:51.870 11:13:18 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:51.870 11:13:18 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.870 11:13:18 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.870 11:13:18 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.870 11:13:18 -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.870 11:13:18 -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.870 11:13:18 -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.870 11:13:18 -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.870 11:13:18 -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.870 11:13:18 -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.870 11:13:18 -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.870 11:13:18 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.870 11:13:18 -- scripts/common.sh@344 -- # case "$op" in 00:05:51.870 11:13:18 -- scripts/common.sh@345 -- # : 1 00:05:51.870 11:13:18 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.870 11:13:18 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.870 11:13:18 -- scripts/common.sh@365 -- # decimal 1 00:05:51.870 11:13:18 -- scripts/common.sh@353 -- # local d=1 00:05:51.870 11:13:18 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.870 11:13:18 -- scripts/common.sh@355 -- # echo 1 00:05:51.870 11:13:18 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.870 11:13:18 -- scripts/common.sh@366 -- # decimal 2 00:05:51.870 11:13:18 -- scripts/common.sh@353 -- # local d=2 00:05:51.870 11:13:18 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.870 11:13:18 -- scripts/common.sh@355 -- # echo 2 00:05:51.870 11:13:18 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.870 11:13:18 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.870 11:13:18 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.870 11:13:18 -- scripts/common.sh@368 -- # return 0 00:05:51.870 11:13:18 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.870 11:13:18 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:51.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.870 --rc genhtml_branch_coverage=1 00:05:51.870 --rc genhtml_function_coverage=1 00:05:51.870 --rc genhtml_legend=1 00:05:51.870 --rc geninfo_all_blocks=1 00:05:51.870 --rc geninfo_unexecuted_blocks=1 00:05:51.870 00:05:51.870 ' 00:05:51.870 11:13:18 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:51.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.870 --rc genhtml_branch_coverage=1 00:05:51.870 --rc genhtml_function_coverage=1 00:05:51.870 --rc genhtml_legend=1 00:05:51.870 --rc geninfo_all_blocks=1 00:05:51.870 --rc geninfo_unexecuted_blocks=1 00:05:51.870 00:05:51.870 ' 00:05:51.870 11:13:18 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:51.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.870 --rc genhtml_branch_coverage=1 00:05:51.870 --rc genhtml_function_coverage=1 00:05:51.870 --rc genhtml_legend=1 00:05:51.870 --rc geninfo_all_blocks=1 00:05:51.870 --rc geninfo_unexecuted_blocks=1 00:05:51.870 00:05:51.870 ' 00:05:51.870 11:13:18 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:51.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.870 --rc genhtml_branch_coverage=1 00:05:51.870 --rc genhtml_function_coverage=1 00:05:51.870 --rc genhtml_legend=1 00:05:51.870 --rc geninfo_all_blocks=1 00:05:51.870 --rc geninfo_unexecuted_blocks=1 00:05:51.870 00:05:51.870 ' 00:05:51.870 11:13:18 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:51.870 11:13:18 -- nvmf/common.sh@7 -- # uname -s 00:05:51.870 11:13:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.870 11:13:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.870 11:13:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.870 11:13:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.870 11:13:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.870 11:13:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.870 11:13:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.870 11:13:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.870 11:13:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.870 11:13:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.870 11:13:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d373939e-959a-48c7-a724-02880d24a783 00:05:51.870 
11:13:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=d373939e-959a-48c7-a724-02880d24a783 00:05:51.870 11:13:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.870 11:13:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.870 11:13:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:51.870 11:13:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.870 11:13:18 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:51.870 11:13:18 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.870 11:13:18 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.870 11:13:18 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.870 11:13:18 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.870 11:13:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.870 11:13:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.870 11:13:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.870 11:13:18 -- paths/export.sh@5 -- # export PATH 00:05:51.870 11:13:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.870 11:13:18 -- nvmf/common.sh@51 -- # : 0 00:05:51.870 11:13:18 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.870 11:13:18 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.870 11:13:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.870 11:13:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.870 11:13:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.870 11:13:18 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.870 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.870 11:13:18 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.870 11:13:18 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.870 11:13:18 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.870 11:13:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:51.870 11:13:18 -- spdk/autotest.sh@32 -- # uname -s 00:05:51.870 11:13:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:51.870 11:13:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:51.870 11:13:18 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:51.870 11:13:18 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:51.870 11:13:18 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:51.870 11:13:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:51.870 11:13:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:51.870 11:13:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:51.870 11:13:18 -- spdk/autotest.sh@48 -- # udevadm_pid=54848 00:05:51.870 11:13:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:51.870 11:13:18 -- pm/common@17 -- # local monitor 00:05:51.870 11:13:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:51.870 11:13:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:51.870 11:13:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:51.870 11:13:18 -- pm/common@25 -- # sleep 1 00:05:51.870 11:13:18 -- pm/common@21 -- # date +%s 00:05:51.870 11:13:18 -- pm/common@21 -- # date +%s 00:05:51.870 11:13:18 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733829198 00:05:51.870 11:13:18 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733829198 00:05:51.870 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733829198_collect-cpu-load.pm.log 00:05:51.870 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733829198_collect-vmstat.pm.log 00:05:52.878 11:13:19 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:52.878 11:13:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:52.878 11:13:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:52.878 11:13:19 -- common/autotest_common.sh@10 -- # set +x 00:05:52.878 11:13:19 -- spdk/autotest.sh@59 -- # create_test_list 00:05:52.878 11:13:19 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:52.878 11:13:19 -- common/autotest_common.sh@10 -- # set +x 00:05:52.878 11:13:19 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:52.878 11:13:19 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:52.878 11:13:19 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:52.878 11:13:19 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:52.878 11:13:19 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:52.878 11:13:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:52.878 11:13:19 -- common/autotest_common.sh@1457 -- # uname 00:05:52.878 11:13:19 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:52.878 11:13:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:52.878 11:13:19 -- common/autotest_common.sh@1477 -- # uname 00:05:52.878 11:13:19 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:52.878 11:13:19 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:52.878 11:13:19 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:53.137 lcov: LCOV version 1.15 00:05:53.137 11:13:20 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:11.226 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:11.226 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:26.107 11:13:52 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:26.107 11:13:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.107 11:13:52 -- common/autotest_common.sh@10 -- # set +x 00:06:26.107 11:13:52 -- spdk/autotest.sh@78 -- # rm -f 00:06:26.107 11:13:52 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:26.107 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:26.366 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:26.624 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:26.624 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:06:26.624 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:06:26.624 11:13:53 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:26.624 11:13:53 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:26.624 11:13:53 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:26.624 11:13:53 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:26.624 11:13:53 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:26.624 11:13:53 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:26.624 11:13:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:26.624 11:13:53 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:26.624 11:13:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:26.624 11:13:53 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:26.624 11:13:53 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:26.624 11:13:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:26.624 11:13:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:26.624 11:13:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:26.624 11:13:53 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:26.624 11:13:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:26.624 11:13:53 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:26.624 11:13:53 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:26.624 11:13:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:26.624 11:13:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:26.624 11:13:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:26.624 11:13:53 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:26.624 11:13:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:26.624 11:13:53 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:26.624 11:13:53 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:26.624 11:13:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:26.624 11:13:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:26.624 11:13:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:26.624 11:13:53 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:06:26.625 11:13:53 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:26.625 11:13:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:26.625 11:13:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:26.625 11:13:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:26.625 11:13:53 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:26.625 11:13:53 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:26.625 11:13:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:26.625 11:13:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:26.625 11:13:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:26.625 11:13:53 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:26.625 11:13:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:26.625 11:13:53 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:26.625 11:13:53 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:26.625 11:13:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:26.625 11:13:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:26.625 11:13:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:26.625 11:13:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:26.625 11:13:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:26.625 11:13:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:26.625 11:13:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:26.625 11:13:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:26.625 No valid GPT data, bailing 00:06:26.625 11:13:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:26.625 11:13:53 -- scripts/common.sh@394 -- # pt= 00:06:26.625 11:13:53 -- scripts/common.sh@395 -- # return 1 00:06:26.625 11:13:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:26.625 1+0 records in 00:06:26.625 1+0 records out 00:06:26.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182283 s, 57.5 MB/s 00:06:26.625 11:13:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:26.625 11:13:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:26.625 11:13:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:26.625 11:13:53 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:26.625 11:13:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:26.883 No valid GPT data, bailing 00:06:26.884 11:13:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:26.884 11:13:53 -- scripts/common.sh@394 -- # pt= 00:06:26.884 11:13:53 -- scripts/common.sh@395 -- # return 1 00:06:26.884 11:13:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:26.884 1+0 records in 00:06:26.884 1+0 records out 00:06:26.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491448 s, 213 MB/s 00:06:26.884 11:13:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:26.884 11:13:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:26.884 11:13:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:06:26.884 11:13:53 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:06:26.884 11:13:53 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:06:26.884 No valid GPT data, bailing 00:06:26.884 11:13:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:26.884 11:13:53 -- scripts/common.sh@394 -- # pt= 00:06:26.884 11:13:53 -- scripts/common.sh@395 -- # return 1 00:06:26.884 11:13:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:06:26.884 1+0 records in 00:06:26.884 1+0 records out 00:06:26.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00632778 s, 166 MB/s 00:06:26.884 11:13:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:26.884 11:13:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:26.884 11:13:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:06:26.884 11:13:53 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:06:26.884 11:13:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:06:26.884 No valid GPT data, bailing 00:06:26.884 11:13:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:06:26.884 11:13:53 -- scripts/common.sh@394 -- # pt= 00:06:26.884 11:13:53 -- scripts/common.sh@395 -- # return 1 00:06:26.884 11:13:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:06:26.884 1+0 records in 00:06:26.884 1+0 records out 00:06:26.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621416 s, 169 MB/s 00:06:26.884 11:13:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:26.884 11:13:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:26.884 11:13:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:06:26.884 11:13:53 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:06:26.884 11:13:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:06:27.143 No valid GPT data, bailing 00:06:27.143 11:13:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:06:27.143 11:13:54 -- scripts/common.sh@394 -- # pt= 00:06:27.143 11:13:54 -- scripts/common.sh@395 -- # return 1 00:06:27.143 11:13:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:06:27.143 1+0 records in 00:06:27.143 1+0 records out 00:06:27.143 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00546802 s, 192 MB/s 00:06:27.143 11:13:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:27.143 11:13:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:27.143 11:13:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:06:27.143 11:13:54 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:06:27.143 11:13:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:06:27.143 No valid GPT data, bailing 00:06:27.143 11:13:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:27.143 11:13:54 -- scripts/common.sh@394 -- # pt= 00:06:27.143 11:13:54 -- scripts/common.sh@395 -- # return 1 00:06:27.143 11:13:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:06:27.143 1+0 records in 00:06:27.143 1+0 records out 00:06:27.143 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00619429 s, 169 MB/s 00:06:27.143 11:13:54 -- spdk/autotest.sh@105 -- # sync 00:06:27.143 11:13:54 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:27.143 11:13:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:27.143 11:13:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:30.433 
11:13:57 -- spdk/autotest.sh@111 -- # uname -s 00:06:30.433 11:13:57 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:30.433 11:13:57 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:30.433 11:13:57 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:31.001 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:31.570 Hugepages 00:06:31.570 node hugesize free / total 00:06:31.570 node0 1048576kB 0 / 0 00:06:31.570 node0 2048kB 0 / 0 00:06:31.570 00:06:31.570 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:31.829 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:31.829 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:32.089 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:32.089 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:06:32.349 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:32.349 11:13:59 -- spdk/autotest.sh@117 -- # uname -s 00:06:32.349 11:13:59 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:32.349 11:13:59 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:32.349 11:13:59 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:32.917 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:33.854 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:33.854 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:33.854 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:33.854 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:33.854 11:14:00 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:34.792 11:14:01 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:34.792 11:14:01 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:34.792 11:14:01 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:34.792 11:14:01 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:34.792 11:14:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:34.792 11:14:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:34.792 11:14:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:34.792 11:14:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:34.792 11:14:01 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:35.051 11:14:02 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:06:35.051 11:14:02 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:35.051 11:14:02 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:35.619 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:35.878 Waiting for block devices as requested 00:06:35.878 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:35.878 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:36.156 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:36.156 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:41.425 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:41.425 11:14:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:41.425 11:14:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
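[editor's note] The trace that follows resolves each PCI address to its /dev/nvme* controller node through sysfs. A hedged reconstruction of that lookup, inferred from the readlink/grep/basename calls visible in the trace rather than taken verbatim from the script:

    # Map a PCI BDF (e.g. 0000:00:10.0) to its nvme controller name by
    # matching the resolved /sys/class/nvme/nvme* symlink targets
    # against the BDF, then stripping the path down to the device name.
    get_nvme_ctrlr_from_bdf() {
        local bdf=$1 path
        path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
        basename "$path"   # e.g. nvme1; the caller prefixes /dev/ itself
    }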
00:06:41.425 11:14:08 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:41.425 11:14:08 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:41.425 11:14:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:41.425 11:14:08 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:41.425 11:14:08 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:41.425 11:14:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:41.425 11:14:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:41.425 11:14:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:41.425 11:14:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:41.425 11:14:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:41.425 11:14:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:41.425 11:14:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:41.425 11:14:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:41.425 11:14:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:41.425 11:14:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:41.425 11:14:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:41.425 11:14:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:41.425 11:14:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:41.425 11:14:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:41.425 11:14:08 -- common/autotest_common.sh@1543 -- # continue 00:06:41.425 11:14:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:41.425 11:14:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:41.425 11:14:08 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:41.425 11:14:08 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:41.425 11:14:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:41.425 11:14:08 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:41.425 11:14:08 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:41.425 11:14:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:41.425 11:14:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:41.425 11:14:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:41.426 11:14:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:41.426 11:14:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:41.426 11:14:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:41.426 11:14:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:41.426 11:14:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:41.426 11:14:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:41.426 11:14:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:41.426 11:14:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:41.426 11:14:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:41.426 11:14:08 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:06:41.426 11:14:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:41.426 11:14:08 -- common/autotest_common.sh@1543 -- # continue 00:06:41.426 11:14:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:41.426 11:14:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:41.426 11:14:08 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:06:41.426 11:14:08 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:41.426 11:14:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:41.426 11:14:08 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:41.426 11:14:08 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:41.426 11:14:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:06:41.426 11:14:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:06:41.426 11:14:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:06:41.426 11:14:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:41.426 11:14:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:06:41.426 11:14:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:41.426 11:14:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:41.426 11:14:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:41.426 11:14:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:41.426 11:14:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:41.426 11:14:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:06:41.426 11:14:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:41.426 11:14:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:41.426 11:14:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:41.426 11:14:08 -- common/autotest_common.sh@1543 -- # continue 00:06:41.426 11:14:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:41.426 11:14:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:41.426 11:14:08 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:41.426 11:14:08 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:06:41.426 11:14:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:41.426 11:14:08 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:41.426 11:14:08 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:41.426 11:14:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:06:41.426 11:14:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:06:41.426 11:14:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:06:41.426 11:14:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:06:41.426 11:14:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:41.426 11:14:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:41.426 11:14:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:41.426 11:14:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:41.426 11:14:08 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:41.426 11:14:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:41.426 11:14:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:06:41.426 11:14:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:41.426 11:14:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:41.426 11:14:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:41.426 11:14:08 -- common/autotest_common.sh@1543 -- # continue 00:06:41.426 11:14:08 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:41.426 11:14:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:41.426 11:14:08 -- common/autotest_common.sh@10 -- # set +x 00:06:41.685 11:14:08 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:41.685 11:14:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:41.685 11:14:08 -- common/autotest_common.sh@10 -- # set +x 00:06:41.685 11:14:08 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:42.252 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:43.186 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.186 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.186 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.186 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.186 11:14:10 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:43.186 11:14:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.186 11:14:10 -- common/autotest_common.sh@10 -- # set +x 00:06:43.186 11:14:10 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:43.186 11:14:10 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:43.186 11:14:10 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:43.186 11:14:10 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:43.186 11:14:10 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:43.186 11:14:10 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:43.186 11:14:10 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:43.186 11:14:10 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:43.186 11:14:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:43.186 11:14:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:43.186 11:14:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:43.186 11:14:10 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:43.186 11:14:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:43.444 11:14:10 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:06:43.444 11:14:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:43.444 11:14:10 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:43.444 11:14:10 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:43.444 11:14:10 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:43.444 11:14:10 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:43.444 11:14:10 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:43.444 11:14:10 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:43.444 11:14:10 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:43.444 
11:14:10 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:43.444 11:14:10 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:43.444 11:14:10 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:43.444 11:14:10 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:43.444 11:14:10 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:43.444 11:14:10 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:43.444 11:14:10 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:43.444 11:14:10 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:43.444 11:14:10 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:43.444 11:14:10 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:43.444 11:14:10 -- common/autotest_common.sh@1572 -- # return 0 00:06:43.444 11:14:10 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:43.444 11:14:10 -- common/autotest_common.sh@1580 -- # return 0 00:06:43.444 11:14:10 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:43.444 11:14:10 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:43.444 11:14:10 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:43.444 11:14:10 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:43.444 11:14:10 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:43.444 11:14:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.444 11:14:10 -- common/autotest_common.sh@10 -- # set +x 00:06:43.444 11:14:10 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:43.444 11:14:10 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:43.444 11:14:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.444 11:14:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.444 11:14:10 -- common/autotest_common.sh@10 -- # set +x 00:06:43.444 ************************************ 00:06:43.444 START TEST env 00:06:43.444 ************************************ 00:06:43.444 11:14:10 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:43.444 * Looking for test storage... 
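[editor's note] About the id-ctrl probes traced above: the script greps the OACS (Optional Admin Command Support) field and masks it against bit 3, which advertises namespace management. A minimal sketch of that check, reconstructed from the grep/cut trace (the 0x8 mask is an assumption, but it is consistent with the logged oacs=' 0x12a' yielding oacs_ns_manage=8):

    # 0x12a & 0x8 == 8: namespace management is supported, so the
    # unvmcap (unallocated NVM capacity) probe that follows is reached.
    oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)
    oacs_ns_manage=$((oacs & 0x8))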
00:06:43.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:43.444 11:14:10 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:43.444 11:14:10 env -- common/autotest_common.sh@1711 -- # lcov --version 00:06:43.444 11:14:10 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:43.703 11:14:10 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:43.703 11:14:10 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.703 11:14:10 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.703 11:14:10 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.703 11:14:10 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.703 11:14:10 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.703 11:14:10 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.703 11:14:10 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.703 11:14:10 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.703 11:14:10 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.703 11:14:10 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.703 11:14:10 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.703 11:14:10 env -- scripts/common.sh@344 -- # case "$op" in 00:06:43.703 11:14:10 env -- scripts/common.sh@345 -- # : 1 00:06:43.703 11:14:10 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.703 11:14:10 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.703 11:14:10 env -- scripts/common.sh@365 -- # decimal 1 00:06:43.703 11:14:10 env -- scripts/common.sh@353 -- # local d=1 00:06:43.703 11:14:10 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.703 11:14:10 env -- scripts/common.sh@355 -- # echo 1 00:06:43.703 11:14:10 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.703 11:14:10 env -- scripts/common.sh@366 -- # decimal 2 00:06:43.703 11:14:10 env -- scripts/common.sh@353 -- # local d=2 00:06:43.703 11:14:10 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.703 11:14:10 env -- scripts/common.sh@355 -- # echo 2 00:06:43.703 11:14:10 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.703 11:14:10 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.703 11:14:10 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.703 11:14:10 env -- scripts/common.sh@368 -- # return 0 00:06:43.703 11:14:10 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.703 11:14:10 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:43.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.703 --rc genhtml_branch_coverage=1 00:06:43.703 --rc genhtml_function_coverage=1 00:06:43.703 --rc genhtml_legend=1 00:06:43.703 --rc geninfo_all_blocks=1 00:06:43.703 --rc geninfo_unexecuted_blocks=1 00:06:43.703 00:06:43.703 ' 00:06:43.703 11:14:10 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:43.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.703 --rc genhtml_branch_coverage=1 00:06:43.703 --rc genhtml_function_coverage=1 00:06:43.703 --rc genhtml_legend=1 00:06:43.703 --rc geninfo_all_blocks=1 00:06:43.703 --rc geninfo_unexecuted_blocks=1 00:06:43.703 00:06:43.703 ' 00:06:43.703 11:14:10 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:43.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.703 --rc genhtml_branch_coverage=1 00:06:43.703 --rc genhtml_function_coverage=1 00:06:43.703 --rc 
genhtml_legend=1 00:06:43.703 --rc geninfo_all_blocks=1 00:06:43.703 --rc geninfo_unexecuted_blocks=1 00:06:43.703 00:06:43.703 ' 00:06:43.703 11:14:10 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:43.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.703 --rc genhtml_branch_coverage=1 00:06:43.703 --rc genhtml_function_coverage=1 00:06:43.703 --rc genhtml_legend=1 00:06:43.703 --rc geninfo_all_blocks=1 00:06:43.703 --rc geninfo_unexecuted_blocks=1 00:06:43.703 00:06:43.703 ' 00:06:43.703 11:14:10 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:43.703 11:14:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.703 11:14:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.703 11:14:10 env -- common/autotest_common.sh@10 -- # set +x 00:06:43.703 ************************************ 00:06:43.703 START TEST env_memory 00:06:43.703 ************************************ 00:06:43.703 11:14:10 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:43.703 00:06:43.703 00:06:43.703 CUnit - A unit testing framework for C - Version 2.1-3 00:06:43.703 http://cunit.sourceforge.net/ 00:06:43.703 00:06:43.703 00:06:43.703 Suite: memory 00:06:43.703 Test: alloc and free memory map ...[2024-12-10 11:14:10.714350] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:43.703 passed 00:06:43.703 Test: mem map translation ...[2024-12-10 11:14:10.759384] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:43.703 [2024-12-10 11:14:10.759437] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:43.703 [2024-12-10 11:14:10.759504] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:43.703 [2024-12-10 11:14:10.759528] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:43.961 passed 00:06:43.961 Test: mem map registration ...[2024-12-10 11:14:10.827404] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:43.961 [2024-12-10 11:14:10.827445] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:43.961 passed 00:06:43.961 Test: mem map adjacent registrations ...passed 00:06:43.961 00:06:43.961 Run Summary: Type Total Ran Passed Failed Inactive 00:06:43.961 suites 1 1 n/a 0 0 00:06:43.961 tests 4 4 4 0 0 00:06:43.961 asserts 152 152 152 0 n/a 00:06:43.961 00:06:43.961 Elapsed time = 0.243 seconds 00:06:43.961 00:06:43.961 real 0m0.297s 00:06:43.961 user 0m0.253s 00:06:43.961 sys 0m0.035s 00:06:43.961 11:14:10 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.961 11:14:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:43.961 ************************************ 00:06:43.961 END TEST env_memory 00:06:43.961 ************************************ 00:06:43.961 11:14:11 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:43.961 11:14:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.961 11:14:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.961 11:14:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:43.961 ************************************ 00:06:43.961 START TEST env_vtophys 00:06:43.961 ************************************ 00:06:43.961 11:14:11 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:43.961 EAL: lib.eal log level changed from notice to debug 00:06:43.961 EAL: Detected lcore 0 as core 0 on socket 0 00:06:43.961 EAL: Detected lcore 1 as core 0 on socket 0 00:06:43.961 EAL: Detected lcore 2 as core 0 on socket 0 00:06:43.961 EAL: Detected lcore 3 as core 0 on socket 0 00:06:43.961 EAL: Detected lcore 4 as core 0 on socket 0 00:06:43.961 EAL: Detected lcore 5 as core 0 on socket 0 00:06:43.961 EAL: Detected lcore 6 as core 0 on socket 0 00:06:43.961 EAL: Detected lcore 7 as core 0 on socket 0 00:06:43.961 EAL: Detected lcore 8 as core 0 on socket 0 00:06:43.961 EAL: Detected lcore 9 as core 0 on socket 0 00:06:44.220 EAL: Maximum logical cores by configuration: 128 00:06:44.220 EAL: Detected CPU lcores: 10 00:06:44.220 EAL: Detected NUMA nodes: 1 00:06:44.220 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:44.220 EAL: Detected shared linkage of DPDK 00:06:44.220 EAL: No shared files mode enabled, IPC will be disabled 00:06:44.220 EAL: Selected IOVA mode 'PA' 00:06:44.220 EAL: Probing VFIO support... 00:06:44.220 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:44.220 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:44.220 EAL: Ask a virtual area of 0x2e000 bytes 00:06:44.220 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:44.220 EAL: Setting up physically contiguous memory... 
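The EAL bring-up above (lcore detection, VFIO probe, IOVA mode selection, memseg list reservation) is driven through SPDK's env layer rather than by calling DPDK directly. A minimal sketch of that initialization, assuming an illustrative process name and the single-core mask these tests run with:

    /* Hedged sketch of the SPDK env init that produces the EAL output
     * above. The option values are illustrative assumptions, not the
     * exact flags the vtophys test binary passes. */
    #include "spdk/env.h"
    #include <stdio.h>

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);        /* fill in library defaults */
        opts.name = "vtophys_sketch";     /* assumed process name */
        opts.core_mask = "0x1";           /* single core, as in -c 0x1 */

        if (spdk_env_init(&opts) < 0) {   /* boots DPDK's EAL underneath */
            fprintf(stderr, "spdk_env_init failed\n");
            return 1;
        }
        /* ... run the translation tests ... */
        spdk_env_fini();
        return 0;
    }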
00:06:44.220 EAL: Setting maximum number of open files to 524288 00:06:44.220 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:44.220 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:44.220 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.220 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:44.220 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.220 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.220 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:44.220 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:44.220 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.220 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:44.220 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.220 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.220 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:44.220 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:44.220 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.220 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:44.220 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.220 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.220 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:44.220 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:44.220 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.220 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:44.220 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.220 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.220 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:44.220 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:44.220 EAL: Hugepages will be freed exactly as allocated. 00:06:44.220 EAL: No shared files mode enabled, IPC is disabled 00:06:44.220 EAL: No shared files mode enabled, IPC is disabled 00:06:44.220 EAL: TSC frequency is ~2490000 KHz 00:06:44.220 EAL: Main lcore 0 is ready (tid=7fb110877a40;cpuset=[0]) 00:06:44.220 EAL: Trying to obtain current memory policy. 00:06:44.220 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.220 EAL: Restoring previous memory policy: 0 00:06:44.220 EAL: request: mp_malloc_sync 00:06:44.220 EAL: No shared files mode enabled, IPC is disabled 00:06:44.220 EAL: Heap on socket 0 was expanded by 2MB 00:06:44.220 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:44.220 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:44.220 EAL: Mem event callback 'spdk:(nil)' registered 00:06:44.220 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:44.220 00:06:44.220 00:06:44.220 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.220 http://cunit.sourceforge.net/ 00:06:44.220 00:06:44.220 00:06:44.220 Suite: components_suite 00:06:44.787 Test: vtophys_malloc_test ...passed 00:06:44.787 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
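What the suite checks from here on is the translation itself: a buffer allocated through the env layer must resolve to a usable physical (or IOVA) address. A hedged sketch, with the 4 KiB size and alignment assumed for illustration:

    /* Sketch of the lookup the vtophys test exercises: allocate a
     * DMA-safe buffer and ask the env layer for its physical address. */
    #include "spdk/env.h"

    void vtophys_example(void)
    {
        /* 4 KiB, 4 KiB-aligned; third arg (legacy phys_addr out) is NULL */
        void *buf = spdk_dma_zmalloc(4096, 0x1000, NULL);
        if (buf == NULL) {
            return;
        }

        uint64_t paddr = spdk_vtophys(buf, NULL);
        if (paddr == SPDK_VTOPHYS_ERROR) {
            /* address not registered with the env layer, so not translatable */
        }

        spdk_dma_free(buf);
    }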
00:06:44.787 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.787 EAL: Restoring previous memory policy: 4 00:06:44.787 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.787 EAL: request: mp_malloc_sync 00:06:44.787 EAL: No shared files mode enabled, IPC is disabled 00:06:44.787 EAL: Heap on socket 0 was expanded by 4MB 00:06:44.787 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.787 EAL: request: mp_malloc_sync 00:06:44.787 EAL: No shared files mode enabled, IPC is disabled 00:06:44.787 EAL: Heap on socket 0 was shrunk by 4MB 00:06:44.787 EAL: Trying to obtain current memory policy. 00:06:44.787 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.787 EAL: Restoring previous memory policy: 4 00:06:44.787 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.787 EAL: request: mp_malloc_sync 00:06:44.787 EAL: No shared files mode enabled, IPC is disabled 00:06:44.787 EAL: Heap on socket 0 was expanded by 6MB 00:06:44.787 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.787 EAL: request: mp_malloc_sync 00:06:44.787 EAL: No shared files mode enabled, IPC is disabled 00:06:44.787 EAL: Heap on socket 0 was shrunk by 6MB 00:06:44.787 EAL: Trying to obtain current memory policy. 00:06:44.787 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.787 EAL: Restoring previous memory policy: 4 00:06:44.787 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.787 EAL: request: mp_malloc_sync 00:06:44.787 EAL: No shared files mode enabled, IPC is disabled 00:06:44.787 EAL: Heap on socket 0 was expanded by 10MB 00:06:44.787 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.787 EAL: request: mp_malloc_sync 00:06:44.787 EAL: No shared files mode enabled, IPC is disabled 00:06:44.787 EAL: Heap on socket 0 was shrunk by 10MB 00:06:44.787 EAL: Trying to obtain current memory policy. 00:06:44.787 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.787 EAL: Restoring previous memory policy: 4 00:06:44.787 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.787 EAL: request: mp_malloc_sync 00:06:44.787 EAL: No shared files mode enabled, IPC is disabled 00:06:44.787 EAL: Heap on socket 0 was expanded by 18MB 00:06:44.787 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.787 EAL: request: mp_malloc_sync 00:06:44.787 EAL: No shared files mode enabled, IPC is disabled 00:06:44.787 EAL: Heap on socket 0 was shrunk by 18MB 00:06:44.787 EAL: Trying to obtain current memory policy. 00:06:44.787 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.787 EAL: Restoring previous memory policy: 4 00:06:44.787 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.787 EAL: request: mp_malloc_sync 00:06:44.787 EAL: No shared files mode enabled, IPC is disabled 00:06:44.787 EAL: Heap on socket 0 was expanded by 34MB 00:06:44.787 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.787 EAL: request: mp_malloc_sync 00:06:44.787 EAL: No shared files mode enabled, IPC is disabled 00:06:44.787 EAL: Heap on socket 0 was shrunk by 34MB 00:06:45.045 EAL: Trying to obtain current memory policy. 
00:06:45.045 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:45.045 EAL: Restoring previous memory policy: 4 00:06:45.045 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.045 EAL: request: mp_malloc_sync 00:06:45.045 EAL: No shared files mode enabled, IPC is disabled 00:06:45.045 EAL: Heap on socket 0 was expanded by 66MB 00:06:45.045 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.045 EAL: request: mp_malloc_sync 00:06:45.045 EAL: No shared files mode enabled, IPC is disabled 00:06:45.045 EAL: Heap on socket 0 was shrunk by 66MB 00:06:45.303 EAL: Trying to obtain current memory policy. 00:06:45.303 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:45.303 EAL: Restoring previous memory policy: 4 00:06:45.303 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.303 EAL: request: mp_malloc_sync 00:06:45.304 EAL: No shared files mode enabled, IPC is disabled 00:06:45.304 EAL: Heap on socket 0 was expanded by 130MB 00:06:45.304 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.562 EAL: request: mp_malloc_sync 00:06:45.562 EAL: No shared files mode enabled, IPC is disabled 00:06:45.562 EAL: Heap on socket 0 was shrunk by 130MB 00:06:45.562 EAL: Trying to obtain current memory policy. 00:06:45.562 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:45.821 EAL: Restoring previous memory policy: 4 00:06:45.821 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.821 EAL: request: mp_malloc_sync 00:06:45.821 EAL: No shared files mode enabled, IPC is disabled 00:06:45.821 EAL: Heap on socket 0 was expanded by 258MB 00:06:46.080 EAL: Calling mem event callback 'spdk:(nil)' 00:06:46.338 EAL: request: mp_malloc_sync 00:06:46.338 EAL: No shared files mode enabled, IPC is disabled 00:06:46.339 EAL: Heap on socket 0 was shrunk by 258MB 00:06:46.597 EAL: Trying to obtain current memory policy. 00:06:46.597 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:46.856 EAL: Restoring previous memory policy: 4 00:06:46.856 EAL: Calling mem event callback 'spdk:(nil)' 00:06:46.856 EAL: request: mp_malloc_sync 00:06:46.856 EAL: No shared files mode enabled, IPC is disabled 00:06:46.856 EAL: Heap on socket 0 was expanded by 514MB 00:06:47.793 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.793 EAL: request: mp_malloc_sync 00:06:47.793 EAL: No shared files mode enabled, IPC is disabled 00:06:47.793 EAL: Heap on socket 0 was shrunk by 514MB 00:06:48.728 EAL: Trying to obtain current memory policy. 
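Each "Calling mem event callback 'spdk:(nil)'" line above marks SPDK being notified as DPDK expands or shrinks the heap, so its translation maps stay current. Memory obtained outside the env layer can be fed into the same bookkeeping; a small sketch (the region is assumed to be 2 MiB-aligned, which the registration path expects):

    /* Hedged sketch: make an externally allocated region visible to
     * SPDK's memory maps, then drop it again. */
    #include "spdk/env.h"

    int track_external_buffer(void *vaddr, size_t len)
    {
        int rc = spdk_mem_register(vaddr, len);  /* fires the same notify path */
        if (rc != 0) {
            return rc;
        }
        /* ... region is now translatable via spdk_vtophys() ... */
        return spdk_mem_unregister(vaddr, len);
    }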
00:06:48.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:48.728 EAL: Restoring previous memory policy: 4 00:06:48.728 EAL: Calling mem event callback 'spdk:(nil)' 00:06:48.728 EAL: request: mp_malloc_sync 00:06:48.728 EAL: No shared files mode enabled, IPC is disabled 00:06:48.728 EAL: Heap on socket 0 was expanded by 1026MB 00:06:50.633 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.633 EAL: request: mp_malloc_sync 00:06:50.633 EAL: No shared files mode enabled, IPC is disabled 00:06:50.633 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:52.554 passed 00:06:52.554 00:06:52.554 Run Summary: Type Total Ran Passed Failed Inactive 00:06:52.554 suites 1 1 n/a 0 0 00:06:52.554 tests 2 2 2 0 0 00:06:52.554 asserts 5768 5768 5768 0 n/a 00:06:52.554 00:06:52.554 Elapsed time = 8.132 seconds 00:06:52.554 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.554 EAL: request: mp_malloc_sync 00:06:52.554 EAL: No shared files mode enabled, IPC is disabled 00:06:52.554 EAL: Heap on socket 0 was shrunk by 2MB 00:06:52.554 EAL: No shared files mode enabled, IPC is disabled 00:06:52.554 EAL: No shared files mode enabled, IPC is disabled 00:06:52.554 EAL: No shared files mode enabled, IPC is disabled 00:06:52.554 00:06:52.554 real 0m8.475s 00:06:52.554 user 0m7.461s 00:06:52.554 sys 0m0.854s 00:06:52.554 11:14:19 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.554 11:14:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:52.554 ************************************ 00:06:52.554 END TEST env_vtophys 00:06:52.554 ************************************ 00:06:52.554 11:14:19 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:52.554 11:14:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.554 11:14:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.554 11:14:19 env -- common/autotest_common.sh@10 -- # set +x 00:06:52.554 ************************************ 00:06:52.554 START TEST env_pci 00:06:52.554 ************************************ 00:06:52.554 11:14:19 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:52.554 00:06:52.554 00:06:52.554 CUnit - A unit testing framework for C - Version 2.1-3 00:06:52.554 http://cunit.sourceforge.net/ 00:06:52.554 00:06:52.554 00:06:52.554 Suite: pci 00:06:52.554 Test: pci_hook ...[2024-12-10 11:14:19.606590] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57735 has claimed it 00:06:52.554 passed 00:06:52.554 00:06:52.554 Run Summary: Type Total Ran Passed Failed Inactive 00:06:52.554 suites 1 1 n/a 0 0 00:06:52.554 tests 1 1 1 0 0 00:06:52.554 asserts 25 25 25 0 n/a 00:06:52.554 00:06:52.554 Elapsed time = 0.008 seconds 00:06:52.554 EAL: Cannot find device (10000:00:01.0) 00:06:52.554 EAL: Failed to attach device on primary process 00:06:52.813 00:06:52.813 real 0m0.116s 00:06:52.813 user 0m0.048s 00:06:52.813 sys 0m0.066s 00:06:52.813 11:14:19 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.813 11:14:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:52.813 ************************************ 00:06:52.813 END TEST env_pci 00:06:52.813 ************************************ 00:06:52.813 11:14:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:52.813 11:14:19 env -- env/env.sh@15 -- # uname 00:06:52.813 11:14:19 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:52.813 11:14:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:52.813 11:14:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:52.813 11:14:19 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:52.813 11:14:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.813 11:14:19 env -- common/autotest_common.sh@10 -- # set +x 00:06:52.813 ************************************ 00:06:52.813 START TEST env_dpdk_post_init 00:06:52.813 ************************************ 00:06:52.813 11:14:19 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:52.813 EAL: Detected CPU lcores: 10 00:06:52.813 EAL: Detected NUMA nodes: 1 00:06:52.813 EAL: Detected shared linkage of DPDK 00:06:52.813 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:52.813 EAL: Selected IOVA mode 'PA' 00:06:53.072 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:53.072 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:53.072 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:53.072 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:53.072 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:53.072 Starting DPDK initialization... 00:06:53.072 Starting SPDK post initialization... 00:06:53.072 SPDK NVMe probe 00:06:53.072 Attaching to 0000:00:10.0 00:06:53.072 Attaching to 0000:00:11.0 00:06:53.072 Attaching to 0000:00:12.0 00:06:53.072 Attaching to 0000:00:13.0 00:06:53.072 Attached to 0000:00:10.0 00:06:53.072 Attached to 0000:00:11.0 00:06:53.072 Attached to 0000:00:13.0 00:06:53.072 Attached to 0000:00:12.0 00:06:53.072 Cleaning up... 
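The post-init stage above probes the emulated NVMe controllers (1b36:0010) and attaches the spdk_nvme driver to each. The probe/attach handshake behind those "Attaching to 0000:00:10.0" lines looks roughly like this; both callbacks are illustrative stubs:

    /* Sketch of the enumerate-and-claim cycle. A NULL transport ID
     * means "scan the local PCIe bus". */
    #include "spdk/nvme.h"
    #include <stdbool.h>
    #include <stdio.h>

    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true;                    /* claim every controller found */
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    void probe_all_nvme(void)
    {
        if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
            fprintf(stderr, "spdk_nvme_probe failed\n");
        }
    }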
00:06:53.072 00:06:53.072 real 0m0.313s 00:06:53.072 user 0m0.096s 00:06:53.072 sys 0m0.120s 00:06:53.072 11:14:20 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.072 11:14:20 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:53.072 ************************************ 00:06:53.072 END TEST env_dpdk_post_init 00:06:53.072 ************************************ 00:06:53.072 11:14:20 env -- env/env.sh@26 -- # uname 00:06:53.072 11:14:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:53.072 11:14:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:53.072 11:14:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.072 11:14:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.072 11:14:20 env -- common/autotest_common.sh@10 -- # set +x 00:06:53.072 ************************************ 00:06:53.072 START TEST env_mem_callbacks 00:06:53.072 ************************************ 00:06:53.072 11:14:20 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:53.331 EAL: Detected CPU lcores: 10 00:06:53.331 EAL: Detected NUMA nodes: 1 00:06:53.331 EAL: Detected shared linkage of DPDK 00:06:53.331 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:53.331 EAL: Selected IOVA mode 'PA' 00:06:53.331 00:06:53.331 00:06:53.331 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.331 http://cunit.sourceforge.net/ 00:06:53.331 00:06:53.331 00:06:53.331 Suite: memory 00:06:53.331 Test: test ... 00:06:53.331 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:53.331 register 0x200000200000 2097152 00:06:53.331 malloc 3145728 00:06:53.331 register 0x200000400000 4194304 00:06:53.331 buf 0x2000004fffc0 len 3145728 PASSED 00:06:53.331 malloc 64 00:06:53.331 buf 0x2000004ffec0 len 64 PASSED 00:06:53.331 malloc 4194304 00:06:53.331 register 0x200000800000 6291456 00:06:53.331 buf 0x2000009fffc0 len 4194304 PASSED 00:06:53.331 free 0x2000004fffc0 3145728 00:06:53.331 free 0x2000004ffec0 64 00:06:53.331 unregister 0x200000400000 4194304 PASSED 00:06:53.331 free 0x2000009fffc0 4194304 00:06:53.331 unregister 0x200000800000 6291456 PASSED 00:06:53.331 malloc 8388608 00:06:53.331 register 0x200000400000 10485760 00:06:53.331 buf 0x2000005fffc0 len 8388608 PASSED 00:06:53.331 free 0x2000005fffc0 8388608 00:06:53.331 unregister 0x200000400000 10485760 PASSED 00:06:53.331 passed 00:06:53.331 00:06:53.331 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.331 suites 1 1 n/a 0 0 00:06:53.331 tests 1 1 1 0 0 00:06:53.331 asserts 15 15 15 0 n/a 00:06:53.332 00:06:53.332 Elapsed time = 0.075 seconds 00:06:53.332 00:06:53.332 real 0m0.281s 00:06:53.332 user 0m0.106s 00:06:53.332 sys 0m0.073s 00:06:53.332 11:14:20 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.332 11:14:20 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:53.332 ************************************ 00:06:53.332 END TEST env_mem_callbacks 00:06:53.332 ************************************ 00:06:53.590 00:06:53.590 real 0m10.077s 00:06:53.590 user 0m8.205s 00:06:53.590 sys 0m1.513s 00:06:53.590 11:14:20 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.590 11:14:20 env -- common/autotest_common.sh@10 -- # set +x 00:06:53.590 ************************************ 00:06:53.590 END TEST env 00:06:53.590 
************************************ 00:06:53.590 11:14:20 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:53.590 11:14:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.591 11:14:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.591 11:14:20 -- common/autotest_common.sh@10 -- # set +x 00:06:53.591 ************************************ 00:06:53.591 START TEST rpc 00:06:53.591 ************************************ 00:06:53.591 11:14:20 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:53.591 * Looking for test storage... 00:06:53.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:53.591 11:14:20 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:53.591 11:14:20 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:53.591 11:14:20 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:53.850 11:14:20 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:53.850 11:14:20 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.850 11:14:20 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.850 11:14:20 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.850 11:14:20 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.850 11:14:20 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.850 11:14:20 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.850 11:14:20 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.850 11:14:20 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.850 11:14:20 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.850 11:14:20 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.850 11:14:20 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.850 11:14:20 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:53.850 11:14:20 rpc -- scripts/common.sh@345 -- # : 1 00:06:53.850 11:14:20 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.850 11:14:20 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.850 11:14:20 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:53.850 11:14:20 rpc -- scripts/common.sh@353 -- # local d=1 00:06:53.850 11:14:20 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.850 11:14:20 rpc -- scripts/common.sh@355 -- # echo 1 00:06:53.850 11:14:20 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.850 11:14:20 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:53.850 11:14:20 rpc -- scripts/common.sh@353 -- # local d=2 00:06:53.850 11:14:20 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.850 11:14:20 rpc -- scripts/common.sh@355 -- # echo 2 00:06:53.850 11:14:20 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.850 11:14:20 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.850 11:14:20 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.850 11:14:20 rpc -- scripts/common.sh@368 -- # return 0 00:06:53.850 11:14:20 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.850 11:14:20 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:53.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.850 --rc genhtml_branch_coverage=1 00:06:53.850 --rc genhtml_function_coverage=1 00:06:53.850 --rc genhtml_legend=1 00:06:53.850 --rc geninfo_all_blocks=1 00:06:53.850 --rc geninfo_unexecuted_blocks=1 00:06:53.850 00:06:53.850 ' 00:06:53.850 11:14:20 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:53.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.850 --rc genhtml_branch_coverage=1 00:06:53.850 --rc genhtml_function_coverage=1 00:06:53.850 --rc genhtml_legend=1 00:06:53.850 --rc geninfo_all_blocks=1 00:06:53.850 --rc geninfo_unexecuted_blocks=1 00:06:53.850 00:06:53.850 ' 00:06:53.850 11:14:20 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:53.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.850 --rc genhtml_branch_coverage=1 00:06:53.850 --rc genhtml_function_coverage=1 00:06:53.850 --rc genhtml_legend=1 00:06:53.850 --rc geninfo_all_blocks=1 00:06:53.850 --rc geninfo_unexecuted_blocks=1 00:06:53.850 00:06:53.850 ' 00:06:53.850 11:14:20 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:53.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.850 --rc genhtml_branch_coverage=1 00:06:53.850 --rc genhtml_function_coverage=1 00:06:53.850 --rc genhtml_legend=1 00:06:53.850 --rc geninfo_all_blocks=1 00:06:53.850 --rc geninfo_unexecuted_blocks=1 00:06:53.850 00:06:53.850 ' 00:06:53.850 11:14:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57862 00:06:53.850 11:14:20 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:53.850 11:14:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:53.850 11:14:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57862 00:06:53.850 11:14:20 rpc -- common/autotest_common.sh@835 -- # '[' -z 57862 ']' 00:06:53.850 11:14:20 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.850 11:14:20 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.850 11:14:20 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
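With spdk_tgt listening on /var/tmp/spdk.sock, every rpc_cmd that follows (bdev_get_bdevs, bdev_malloc_create, bdev_passthru_create, ...) is a JSON-RPC call into a handler the target registered at startup. A sketch of how such a method is wired up; the name "rpc_example" is a made-up placeholder, not one of the methods exercised below:

    /* Hedged sketch of an SPDK JSON-RPC method registration. */
    #include "spdk/rpc.h"
    #include "spdk/jsonrpc.h"
    #include "spdk/json.h"

    static void
    rpc_example(struct spdk_jsonrpc_request *request,
                const struct spdk_json_val *params)
    {
        struct spdk_json_write_ctx *w;

        if (params != NULL) {           /* this toy method takes no params */
            spdk_jsonrpc_send_error_response(request,
                                             SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                                             "no parameters expected");
            return;
        }

        w = spdk_jsonrpc_begin_result(request);
        spdk_json_write_bool(w, true);  /* respond with: true */
        spdk_jsonrpc_end_result(request, w);
    }
    /* callable once the app is running, like the methods used in this log */
    SPDK_RPC_REGISTER("rpc_example", rpc_example, SPDK_RPC_RUNTIME)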
00:06:53.850 11:14:20 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.850 11:14:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.850 [2024-12-10 11:14:20.891626] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:06:53.850 [2024-12-10 11:14:20.891753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57862 ] 00:06:54.109 [2024-12-10 11:14:21.058112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.109 [2024-12-10 11:14:21.170855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:54.109 [2024-12-10 11:14:21.170929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57862' to capture a snapshot of events at runtime. 00:06:54.109 [2024-12-10 11:14:21.170943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:54.109 [2024-12-10 11:14:21.170957] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:54.109 [2024-12-10 11:14:21.170966] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57862 for offline analysis/debug. 00:06:54.109 [2024-12-10 11:14:21.172271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.047 11:14:22 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.047 11:14:22 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:55.047 11:14:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:55.047 11:14:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:55.047 11:14:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:55.047 11:14:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:55.047 11:14:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.047 11:14:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.047 11:14:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.047 ************************************ 00:06:55.047 START TEST rpc_integrity 00:06:55.047 ************************************ 00:06:55.047 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:55.047 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:55.047 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.047 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.047 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.047 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:55.047 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:55.047 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:55.047 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:55.047 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.047 11:14:22 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.307 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.307 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:55.307 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:55.307 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.307 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.307 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.307 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:55.307 { 00:06:55.307 "name": "Malloc0", 00:06:55.307 "aliases": [ 00:06:55.307 "19daf5a2-df35-4666-8fff-264cb9bb76f6" 00:06:55.307 ], 00:06:55.307 "product_name": "Malloc disk", 00:06:55.307 "block_size": 512, 00:06:55.307 "num_blocks": 16384, 00:06:55.307 "uuid": "19daf5a2-df35-4666-8fff-264cb9bb76f6", 00:06:55.307 "assigned_rate_limits": { 00:06:55.307 "rw_ios_per_sec": 0, 00:06:55.307 "rw_mbytes_per_sec": 0, 00:06:55.307 "r_mbytes_per_sec": 0, 00:06:55.307 "w_mbytes_per_sec": 0 00:06:55.307 }, 00:06:55.307 "claimed": false, 00:06:55.307 "zoned": false, 00:06:55.307 "supported_io_types": { 00:06:55.307 "read": true, 00:06:55.307 "write": true, 00:06:55.307 "unmap": true, 00:06:55.307 "flush": true, 00:06:55.307 "reset": true, 00:06:55.307 "nvme_admin": false, 00:06:55.307 "nvme_io": false, 00:06:55.307 "nvme_io_md": false, 00:06:55.307 "write_zeroes": true, 00:06:55.307 "zcopy": true, 00:06:55.307 "get_zone_info": false, 00:06:55.307 "zone_management": false, 00:06:55.307 "zone_append": false, 00:06:55.307 "compare": false, 00:06:55.307 "compare_and_write": false, 00:06:55.307 "abort": true, 00:06:55.307 "seek_hole": false, 00:06:55.307 "seek_data": false, 00:06:55.307 "copy": true, 00:06:55.307 "nvme_iov_md": false 00:06:55.307 }, 00:06:55.307 "memory_domains": [ 00:06:55.307 { 00:06:55.307 "dma_device_id": "system", 00:06:55.307 "dma_device_type": 1 00:06:55.307 }, 00:06:55.307 { 00:06:55.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.307 "dma_device_type": 2 00:06:55.307 } 00:06:55.307 ], 00:06:55.307 "driver_specific": {} 00:06:55.307 } 00:06:55.307 ]' 00:06:55.307 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:55.307 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:55.307 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:55.307 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.307 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.307 [2024-12-10 11:14:22.251404] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:55.307 [2024-12-10 11:14:22.251477] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:55.307 [2024-12-10 11:14:22.251505] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:55.307 [2024-12-10 11:14:22.251519] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:55.307 [2024-12-10 11:14:22.254248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:55.307 [2024-12-10 11:14:22.254300] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:55.307 Passthru0 00:06:55.307 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.307 
11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:55.307 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.307 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.307 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.307 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:55.307 { 00:06:55.307 "name": "Malloc0", 00:06:55.307 "aliases": [ 00:06:55.307 "19daf5a2-df35-4666-8fff-264cb9bb76f6" 00:06:55.307 ], 00:06:55.307 "product_name": "Malloc disk", 00:06:55.307 "block_size": 512, 00:06:55.307 "num_blocks": 16384, 00:06:55.307 "uuid": "19daf5a2-df35-4666-8fff-264cb9bb76f6", 00:06:55.307 "assigned_rate_limits": { 00:06:55.307 "rw_ios_per_sec": 0, 00:06:55.307 "rw_mbytes_per_sec": 0, 00:06:55.307 "r_mbytes_per_sec": 0, 00:06:55.307 "w_mbytes_per_sec": 0 00:06:55.307 }, 00:06:55.307 "claimed": true, 00:06:55.307 "claim_type": "exclusive_write", 00:06:55.307 "zoned": false, 00:06:55.307 "supported_io_types": { 00:06:55.307 "read": true, 00:06:55.307 "write": true, 00:06:55.307 "unmap": true, 00:06:55.307 "flush": true, 00:06:55.307 "reset": true, 00:06:55.307 "nvme_admin": false, 00:06:55.307 "nvme_io": false, 00:06:55.307 "nvme_io_md": false, 00:06:55.307 "write_zeroes": true, 00:06:55.307 "zcopy": true, 00:06:55.307 "get_zone_info": false, 00:06:55.307 "zone_management": false, 00:06:55.307 "zone_append": false, 00:06:55.307 "compare": false, 00:06:55.307 "compare_and_write": false, 00:06:55.307 "abort": true, 00:06:55.307 "seek_hole": false, 00:06:55.307 "seek_data": false, 00:06:55.307 "copy": true, 00:06:55.307 "nvme_iov_md": false 00:06:55.307 }, 00:06:55.307 "memory_domains": [ 00:06:55.307 { 00:06:55.307 "dma_device_id": "system", 00:06:55.307 "dma_device_type": 1 00:06:55.307 }, 00:06:55.307 { 00:06:55.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.307 "dma_device_type": 2 00:06:55.307 } 00:06:55.307 ], 00:06:55.307 "driver_specific": {} 00:06:55.307 }, 00:06:55.307 { 00:06:55.307 "name": "Passthru0", 00:06:55.307 "aliases": [ 00:06:55.307 "eaefe071-4bcc-51ce-8c4b-0bc6e46b1e62" 00:06:55.307 ], 00:06:55.307 "product_name": "passthru", 00:06:55.307 "block_size": 512, 00:06:55.307 "num_blocks": 16384, 00:06:55.307 "uuid": "eaefe071-4bcc-51ce-8c4b-0bc6e46b1e62", 00:06:55.307 "assigned_rate_limits": { 00:06:55.307 "rw_ios_per_sec": 0, 00:06:55.307 "rw_mbytes_per_sec": 0, 00:06:55.307 "r_mbytes_per_sec": 0, 00:06:55.307 "w_mbytes_per_sec": 0 00:06:55.307 }, 00:06:55.307 "claimed": false, 00:06:55.307 "zoned": false, 00:06:55.307 "supported_io_types": { 00:06:55.307 "read": true, 00:06:55.307 "write": true, 00:06:55.307 "unmap": true, 00:06:55.307 "flush": true, 00:06:55.307 "reset": true, 00:06:55.307 "nvme_admin": false, 00:06:55.307 "nvme_io": false, 00:06:55.307 "nvme_io_md": false, 00:06:55.307 "write_zeroes": true, 00:06:55.307 "zcopy": true, 00:06:55.307 "get_zone_info": false, 00:06:55.307 "zone_management": false, 00:06:55.307 "zone_append": false, 00:06:55.307 "compare": false, 00:06:55.307 "compare_and_write": false, 00:06:55.307 "abort": true, 00:06:55.307 "seek_hole": false, 00:06:55.307 "seek_data": false, 00:06:55.307 "copy": true, 00:06:55.307 "nvme_iov_md": false 00:06:55.307 }, 00:06:55.307 "memory_domains": [ 00:06:55.307 { 00:06:55.307 "dma_device_id": "system", 00:06:55.307 "dma_device_type": 1 00:06:55.307 }, 00:06:55.307 { 00:06:55.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.307 "dma_device_type": 2 
00:06:55.307 } 00:06:55.307 ], 00:06:55.307 "driver_specific": { 00:06:55.307 "passthru": { 00:06:55.307 "name": "Passthru0", 00:06:55.307 "base_bdev_name": "Malloc0" 00:06:55.307 } 00:06:55.307 } 00:06:55.307 } 00:06:55.307 ]' 00:06:55.307 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:55.307 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:55.307 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:55.307 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.307 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.307 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.307 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:55.308 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.308 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.308 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.308 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:55.308 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.308 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.308 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.308 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:55.308 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:55.567 11:14:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:55.567 00:06:55.567 real 0m0.351s 00:06:55.567 user 0m0.179s 00:06:55.567 sys 0m0.071s 00:06:55.567 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.567 11:14:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.567 ************************************ 00:06:55.567 END TEST rpc_integrity 00:06:55.567 ************************************ 00:06:55.567 11:14:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:55.567 11:14:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.567 11:14:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.567 11:14:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.567 ************************************ 00:06:55.567 START TEST rpc_plugins 00:06:55.567 ************************************ 00:06:55.567 11:14:22 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:55.567 11:14:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:55.567 11:14:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.567 11:14:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:55.567 11:14:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.567 11:14:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:55.567 11:14:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:55.567 11:14:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.567 11:14:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:55.567 11:14:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.567 11:14:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:55.567 { 00:06:55.567 "name": "Malloc1", 00:06:55.567 "aliases": 
[ 00:06:55.567 "4d5e3943-a865-4d14-b64c-b89b08765a34" 00:06:55.567 ], 00:06:55.567 "product_name": "Malloc disk", 00:06:55.567 "block_size": 4096, 00:06:55.567 "num_blocks": 256, 00:06:55.567 "uuid": "4d5e3943-a865-4d14-b64c-b89b08765a34", 00:06:55.567 "assigned_rate_limits": { 00:06:55.567 "rw_ios_per_sec": 0, 00:06:55.567 "rw_mbytes_per_sec": 0, 00:06:55.567 "r_mbytes_per_sec": 0, 00:06:55.567 "w_mbytes_per_sec": 0 00:06:55.567 }, 00:06:55.567 "claimed": false, 00:06:55.567 "zoned": false, 00:06:55.567 "supported_io_types": { 00:06:55.567 "read": true, 00:06:55.567 "write": true, 00:06:55.567 "unmap": true, 00:06:55.567 "flush": true, 00:06:55.567 "reset": true, 00:06:55.567 "nvme_admin": false, 00:06:55.567 "nvme_io": false, 00:06:55.567 "nvme_io_md": false, 00:06:55.567 "write_zeroes": true, 00:06:55.567 "zcopy": true, 00:06:55.567 "get_zone_info": false, 00:06:55.567 "zone_management": false, 00:06:55.567 "zone_append": false, 00:06:55.567 "compare": false, 00:06:55.567 "compare_and_write": false, 00:06:55.567 "abort": true, 00:06:55.567 "seek_hole": false, 00:06:55.567 "seek_data": false, 00:06:55.567 "copy": true, 00:06:55.567 "nvme_iov_md": false 00:06:55.567 }, 00:06:55.567 "memory_domains": [ 00:06:55.567 { 00:06:55.567 "dma_device_id": "system", 00:06:55.567 "dma_device_type": 1 00:06:55.567 }, 00:06:55.567 { 00:06:55.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.567 "dma_device_type": 2 00:06:55.567 } 00:06:55.567 ], 00:06:55.567 "driver_specific": {} 00:06:55.567 } 00:06:55.567 ]' 00:06:55.567 11:14:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:55.567 11:14:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:55.567 11:14:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:55.567 11:14:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.568 11:14:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:55.568 11:14:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.568 11:14:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:55.568 11:14:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.568 11:14:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:55.568 11:14:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.568 11:14:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:55.568 11:14:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:55.568 11:14:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:55.568 00:06:55.568 real 0m0.171s 00:06:55.568 user 0m0.095s 00:06:55.568 sys 0m0.029s 00:06:55.568 11:14:22 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.568 11:14:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:55.568 ************************************ 00:06:55.568 END TEST rpc_plugins 00:06:55.568 ************************************ 00:06:55.827 11:14:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:55.827 11:14:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.827 11:14:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.827 11:14:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.827 ************************************ 00:06:55.827 START TEST rpc_trace_cmd_test 00:06:55.827 ************************************ 00:06:55.827 11:14:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:06:55.827 11:14:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:55.827 11:14:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:55.827 11:14:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.827 11:14:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.827 11:14:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.827 11:14:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:55.827 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57862", 00:06:55.827 "tpoint_group_mask": "0x8", 00:06:55.827 "iscsi_conn": { 00:06:55.827 "mask": "0x2", 00:06:55.827 "tpoint_mask": "0x0" 00:06:55.827 }, 00:06:55.827 "scsi": { 00:06:55.827 "mask": "0x4", 00:06:55.827 "tpoint_mask": "0x0" 00:06:55.827 }, 00:06:55.827 "bdev": { 00:06:55.827 "mask": "0x8", 00:06:55.827 "tpoint_mask": "0xffffffffffffffff" 00:06:55.827 }, 00:06:55.827 "nvmf_rdma": { 00:06:55.827 "mask": "0x10", 00:06:55.827 "tpoint_mask": "0x0" 00:06:55.827 }, 00:06:55.827 "nvmf_tcp": { 00:06:55.827 "mask": "0x20", 00:06:55.827 "tpoint_mask": "0x0" 00:06:55.827 }, 00:06:55.827 "ftl": { 00:06:55.827 "mask": "0x40", 00:06:55.827 "tpoint_mask": "0x0" 00:06:55.827 }, 00:06:55.827 "blobfs": { 00:06:55.827 "mask": "0x80", 00:06:55.827 "tpoint_mask": "0x0" 00:06:55.827 }, 00:06:55.827 "dsa": { 00:06:55.827 "mask": "0x200", 00:06:55.827 "tpoint_mask": "0x0" 00:06:55.827 }, 00:06:55.827 "thread": { 00:06:55.827 "mask": "0x400", 00:06:55.827 "tpoint_mask": "0x0" 00:06:55.827 }, 00:06:55.827 "nvme_pcie": { 00:06:55.827 "mask": "0x800", 00:06:55.827 "tpoint_mask": "0x0" 00:06:55.827 }, 00:06:55.827 "iaa": { 00:06:55.827 "mask": "0x1000", 00:06:55.827 "tpoint_mask": "0x0" 00:06:55.827 }, 00:06:55.827 "nvme_tcp": { 00:06:55.827 "mask": "0x2000", 00:06:55.827 "tpoint_mask": "0x0" 00:06:55.827 }, 00:06:55.827 "bdev_nvme": { 00:06:55.827 "mask": "0x4000", 00:06:55.827 "tpoint_mask": "0x0" 00:06:55.827 }, 00:06:55.827 "sock": { 00:06:55.827 "mask": "0x8000", 00:06:55.827 "tpoint_mask": "0x0" 00:06:55.827 }, 00:06:55.827 "blob": { 00:06:55.827 "mask": "0x10000", 00:06:55.827 "tpoint_mask": "0x0" 00:06:55.827 }, 00:06:55.827 "bdev_raid": { 00:06:55.827 "mask": "0x20000", 00:06:55.827 "tpoint_mask": "0x0" 00:06:55.827 }, 00:06:55.827 "scheduler": { 00:06:55.827 "mask": "0x40000", 00:06:55.827 "tpoint_mask": "0x0" 00:06:55.827 } 00:06:55.827 }' 00:06:55.827 11:14:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:55.827 11:14:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:55.827 11:14:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:55.827 11:14:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:55.827 11:14:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:55.827 11:14:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:55.827 11:14:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:55.827 11:14:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:55.827 11:14:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:56.086 11:14:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:56.086 00:06:56.086 real 0m0.215s 00:06:56.086 user 0m0.159s 00:06:56.086 sys 0m0.046s 00:06:56.086 11:14:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:56.086 ************************************ 00:06:56.086 END TEST rpc_trace_cmd_test 00:06:56.086 ************************************ 00:06:56.086 11:14:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.086 11:14:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:56.086 11:14:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:56.086 11:14:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:56.086 11:14:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.086 11:14:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.086 11:14:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.086 ************************************ 00:06:56.086 START TEST rpc_daemon_integrity 00:06:56.086 ************************************ 00:06:56.086 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:56.086 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:56.086 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.086 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.086 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.086 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:56.086 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:56.086 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:56.086 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:56.086 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.086 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.086 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.086 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:56.086 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:56.086 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.086 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.087 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.087 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:56.087 { 00:06:56.087 "name": "Malloc2", 00:06:56.087 "aliases": [ 00:06:56.087 "78c6ef40-14d1-44ac-a31d-baeba12bca67" 00:06:56.087 ], 00:06:56.087 "product_name": "Malloc disk", 00:06:56.087 "block_size": 512, 00:06:56.087 "num_blocks": 16384, 00:06:56.087 "uuid": "78c6ef40-14d1-44ac-a31d-baeba12bca67", 00:06:56.087 "assigned_rate_limits": { 00:06:56.087 "rw_ios_per_sec": 0, 00:06:56.087 "rw_mbytes_per_sec": 0, 00:06:56.087 "r_mbytes_per_sec": 0, 00:06:56.087 "w_mbytes_per_sec": 0 00:06:56.087 }, 00:06:56.087 "claimed": false, 00:06:56.087 "zoned": false, 00:06:56.087 "supported_io_types": { 00:06:56.087 "read": true, 00:06:56.087 "write": true, 00:06:56.087 "unmap": true, 00:06:56.087 "flush": true, 00:06:56.087 "reset": true, 00:06:56.087 "nvme_admin": false, 00:06:56.087 "nvme_io": false, 00:06:56.087 "nvme_io_md": false, 00:06:56.087 "write_zeroes": true, 00:06:56.087 "zcopy": true, 00:06:56.087 "get_zone_info": false, 00:06:56.087 "zone_management": false, 00:06:56.087 "zone_append": false, 00:06:56.087 "compare": false, 00:06:56.087 
"compare_and_write": false, 00:06:56.087 "abort": true, 00:06:56.087 "seek_hole": false, 00:06:56.087 "seek_data": false, 00:06:56.087 "copy": true, 00:06:56.087 "nvme_iov_md": false 00:06:56.087 }, 00:06:56.087 "memory_domains": [ 00:06:56.087 { 00:06:56.087 "dma_device_id": "system", 00:06:56.087 "dma_device_type": 1 00:06:56.087 }, 00:06:56.087 { 00:06:56.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.087 "dma_device_type": 2 00:06:56.087 } 00:06:56.087 ], 00:06:56.087 "driver_specific": {} 00:06:56.087 } 00:06:56.087 ]' 00:06:56.087 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:56.087 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:56.087 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:56.087 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.087 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.087 [2024-12-10 11:14:23.198090] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:56.087 [2024-12-10 11:14:23.198156] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:56.087 [2024-12-10 11:14:23.198180] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:56.087 [2024-12-10 11:14:23.198193] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:56.346 [2024-12-10 11:14:23.200643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:56.346 [2024-12-10 11:14:23.200688] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:56.346 Passthru0 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:56.346 { 00:06:56.346 "name": "Malloc2", 00:06:56.346 "aliases": [ 00:06:56.346 "78c6ef40-14d1-44ac-a31d-baeba12bca67" 00:06:56.346 ], 00:06:56.346 "product_name": "Malloc disk", 00:06:56.346 "block_size": 512, 00:06:56.346 "num_blocks": 16384, 00:06:56.346 "uuid": "78c6ef40-14d1-44ac-a31d-baeba12bca67", 00:06:56.346 "assigned_rate_limits": { 00:06:56.346 "rw_ios_per_sec": 0, 00:06:56.346 "rw_mbytes_per_sec": 0, 00:06:56.346 "r_mbytes_per_sec": 0, 00:06:56.346 "w_mbytes_per_sec": 0 00:06:56.346 }, 00:06:56.346 "claimed": true, 00:06:56.346 "claim_type": "exclusive_write", 00:06:56.346 "zoned": false, 00:06:56.346 "supported_io_types": { 00:06:56.346 "read": true, 00:06:56.346 "write": true, 00:06:56.346 "unmap": true, 00:06:56.346 "flush": true, 00:06:56.346 "reset": true, 00:06:56.346 "nvme_admin": false, 00:06:56.346 "nvme_io": false, 00:06:56.346 "nvme_io_md": false, 00:06:56.346 "write_zeroes": true, 00:06:56.346 "zcopy": true, 00:06:56.346 "get_zone_info": false, 00:06:56.346 "zone_management": false, 00:06:56.346 "zone_append": false, 00:06:56.346 "compare": false, 00:06:56.346 "compare_and_write": false, 00:06:56.346 "abort": true, 00:06:56.346 "seek_hole": false, 00:06:56.346 "seek_data": false, 
00:06:56.346 "copy": true, 00:06:56.346 "nvme_iov_md": false 00:06:56.346 }, 00:06:56.346 "memory_domains": [ 00:06:56.346 { 00:06:56.346 "dma_device_id": "system", 00:06:56.346 "dma_device_type": 1 00:06:56.346 }, 00:06:56.346 { 00:06:56.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.346 "dma_device_type": 2 00:06:56.346 } 00:06:56.346 ], 00:06:56.346 "driver_specific": {} 00:06:56.346 }, 00:06:56.346 { 00:06:56.346 "name": "Passthru0", 00:06:56.346 "aliases": [ 00:06:56.346 "2fcba979-4557-5eec-88b1-7b5234547258" 00:06:56.346 ], 00:06:56.346 "product_name": "passthru", 00:06:56.346 "block_size": 512, 00:06:56.346 "num_blocks": 16384, 00:06:56.346 "uuid": "2fcba979-4557-5eec-88b1-7b5234547258", 00:06:56.346 "assigned_rate_limits": { 00:06:56.346 "rw_ios_per_sec": 0, 00:06:56.346 "rw_mbytes_per_sec": 0, 00:06:56.346 "r_mbytes_per_sec": 0, 00:06:56.346 "w_mbytes_per_sec": 0 00:06:56.346 }, 00:06:56.346 "claimed": false, 00:06:56.346 "zoned": false, 00:06:56.346 "supported_io_types": { 00:06:56.346 "read": true, 00:06:56.346 "write": true, 00:06:56.346 "unmap": true, 00:06:56.346 "flush": true, 00:06:56.346 "reset": true, 00:06:56.346 "nvme_admin": false, 00:06:56.346 "nvme_io": false, 00:06:56.346 "nvme_io_md": false, 00:06:56.346 "write_zeroes": true, 00:06:56.346 "zcopy": true, 00:06:56.346 "get_zone_info": false, 00:06:56.346 "zone_management": false, 00:06:56.346 "zone_append": false, 00:06:56.346 "compare": false, 00:06:56.346 "compare_and_write": false, 00:06:56.346 "abort": true, 00:06:56.346 "seek_hole": false, 00:06:56.346 "seek_data": false, 00:06:56.346 "copy": true, 00:06:56.346 "nvme_iov_md": false 00:06:56.346 }, 00:06:56.346 "memory_domains": [ 00:06:56.346 { 00:06:56.346 "dma_device_id": "system", 00:06:56.346 "dma_device_type": 1 00:06:56.346 }, 00:06:56.346 { 00:06:56.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.346 "dma_device_type": 2 00:06:56.346 } 00:06:56.346 ], 00:06:56.346 "driver_specific": { 00:06:56.346 "passthru": { 00:06:56.346 "name": "Passthru0", 00:06:56.346 "base_bdev_name": "Malloc2" 00:06:56.346 } 00:06:56.346 } 00:06:56.346 } 00:06:56.346 ]' 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:56.346 00:06:56.346 real 0m0.368s 00:06:56.346 user 0m0.194s 00:06:56.346 sys 0m0.067s 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.346 11:14:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.346 ************************************ 00:06:56.346 END TEST rpc_daemon_integrity 00:06:56.346 ************************************ 00:06:56.346 11:14:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:56.346 11:14:23 rpc -- rpc/rpc.sh@84 -- # killprocess 57862 00:06:56.346 11:14:23 rpc -- common/autotest_common.sh@954 -- # '[' -z 57862 ']' 00:06:56.347 11:14:23 rpc -- common/autotest_common.sh@958 -- # kill -0 57862 00:06:56.347 11:14:23 rpc -- common/autotest_common.sh@959 -- # uname 00:06:56.606 11:14:23 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.606 11:14:23 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57862 00:06:56.606 11:14:23 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.606 11:14:23 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.606 killing process with pid 57862 00:06:56.606 11:14:23 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57862' 00:06:56.606 11:14:23 rpc -- common/autotest_common.sh@973 -- # kill 57862 00:06:56.606 11:14:23 rpc -- common/autotest_common.sh@978 -- # wait 57862 00:06:59.141 00:06:59.141 real 0m5.352s 00:06:59.141 user 0m5.823s 00:06:59.141 sys 0m1.025s 00:06:59.141 11:14:25 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.141 11:14:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.141 ************************************ 00:06:59.141 END TEST rpc 00:06:59.141 ************************************ 00:06:59.141 11:14:25 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:59.141 11:14:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.141 11:14:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.141 11:14:25 -- common/autotest_common.sh@10 -- # set +x 00:06:59.141 ************************************ 00:06:59.141 START TEST skip_rpc 00:06:59.141 ************************************ 00:06:59.141 11:14:25 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:59.141 * Looking for test storage... 
00:06:59.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:59.141 11:14:26 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:59.141 11:14:26 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:59.141 11:14:26 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:59.141 11:14:26 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:59.141 11:14:26 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.141 11:14:26 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.141 11:14:26 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.141 11:14:26 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.141 11:14:26 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.141 11:14:26 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.141 11:14:26 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.141 11:14:26 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.141 11:14:26 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.141 11:14:26 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.141 11:14:26 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.142 11:14:26 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:59.142 11:14:26 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:59.142 11:14:26 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.142 11:14:26 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.142 11:14:26 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:59.142 11:14:26 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:59.142 11:14:26 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.142 11:14:26 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:59.142 11:14:26 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.142 11:14:26 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:59.142 11:14:26 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:59.142 11:14:26 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.142 11:14:26 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:59.142 11:14:26 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.142 11:14:26 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.142 11:14:26 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.142 11:14:26 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:59.142 11:14:26 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.142 11:14:26 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:59.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.142 --rc genhtml_branch_coverage=1 00:06:59.142 --rc genhtml_function_coverage=1 00:06:59.142 --rc genhtml_legend=1 00:06:59.142 --rc geninfo_all_blocks=1 00:06:59.142 --rc geninfo_unexecuted_blocks=1 00:06:59.142 00:06:59.142 ' 00:06:59.142 11:14:26 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:59.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.142 --rc genhtml_branch_coverage=1 00:06:59.142 --rc genhtml_function_coverage=1 00:06:59.142 --rc genhtml_legend=1 00:06:59.142 --rc geninfo_all_blocks=1 00:06:59.142 --rc geninfo_unexecuted_blocks=1 00:06:59.142 00:06:59.142 ' 00:06:59.142 11:14:26 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:59.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.142 --rc genhtml_branch_coverage=1 00:06:59.142 --rc genhtml_function_coverage=1 00:06:59.142 --rc genhtml_legend=1 00:06:59.142 --rc geninfo_all_blocks=1 00:06:59.142 --rc geninfo_unexecuted_blocks=1 00:06:59.142 00:06:59.142 ' 00:06:59.142 11:14:26 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:59.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.142 --rc genhtml_branch_coverage=1 00:06:59.142 --rc genhtml_function_coverage=1 00:06:59.142 --rc genhtml_legend=1 00:06:59.142 --rc geninfo_all_blocks=1 00:06:59.142 --rc geninfo_unexecuted_blocks=1 00:06:59.142 00:06:59.142 ' 00:06:59.142 11:14:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:59.142 11:14:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:59.142 11:14:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:59.142 11:14:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.142 11:14:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.142 11:14:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.142 ************************************ 00:06:59.142 START TEST skip_rpc 00:06:59.142 ************************************ 00:06:59.142 11:14:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:59.142 11:14:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58097 00:06:59.142 11:14:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:59.142 11:14:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.142 11:14:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:59.401 [2024-12-10 11:14:26.325049] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:06:59.401 [2024-12-10 11:14:26.325866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58097 ] 00:06:59.401 [2024-12-10 11:14:26.510289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.660 [2024-12-10 11:14:26.631629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58097 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58097 ']' 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58097 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58097 00:07:04.933 killing process with pid 58097 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58097' 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58097 00:07:04.933 11:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58097 00:07:06.837 00:07:06.837 real 0m7.472s 00:07:06.837 user 0m6.941s 00:07:06.837 sys 0m0.447s 00:07:06.837 11:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.837 11:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.837 ************************************ 00:07:06.837 END TEST skip_rpc 00:07:06.837 
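The skip_rpc case above boils down to: start the target with --no-rpc-server (so no /var/tmp/spdk.sock listener is created), confirm that a JSON-RPC call such as spdk_get_version fails, then kill the target. A rough equivalent without the harness's NOT helper, assuming the build path shown in the log:

  # sketch only; spdk_tgt path and default RPC socket are taken from the log
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                        # give the reactor time to start
  if scripts/rpc.py spdk_get_version; then       # must fail: no RPC socket exists
      echo "unexpected: RPC succeeded without an RPC server" >&2
      kill "$tgt_pid"; exit 1
  fi
  kill "$tgt_pid"; wait "$tgt_pid" || true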
************************************ 00:07:06.838 11:14:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:06.838 11:14:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.838 11:14:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.838 11:14:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.838 ************************************ 00:07:06.838 START TEST skip_rpc_with_json 00:07:06.838 ************************************ 00:07:06.838 11:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:06.838 11:14:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:06.838 11:14:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.838 11:14:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58201 00:07:06.838 11:14:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:06.838 11:14:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58201 00:07:06.838 11:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58201 ']' 00:07:06.838 11:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.838 11:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.838 11:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.838 11:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.838 11:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:06.838 [2024-12-10 11:14:33.880230] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:07:06.838 [2024-12-10 11:14:33.880445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58201 ] 00:07:07.097 [2024-12-10 11:14:34.079505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.097 [2024-12-10 11:14:34.196178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.035 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.035 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:08.035 11:14:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:08.035 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.035 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:08.035 [2024-12-10 11:14:35.072931] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:08.035 request: 00:07:08.035 { 00:07:08.035 "trtype": "tcp", 00:07:08.035 "method": "nvmf_get_transports", 00:07:08.035 "req_id": 1 00:07:08.035 } 00:07:08.035 Got JSON-RPC error response 00:07:08.035 response: 00:07:08.035 { 00:07:08.035 "code": -19, 00:07:08.035 "message": "No such device" 00:07:08.035 } 00:07:08.035 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:08.035 11:14:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:08.035 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.035 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:08.035 [2024-12-10 11:14:35.089037] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.035 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.035 11:14:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:08.035 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.035 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:08.297 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.297 11:14:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:08.297 { 00:07:08.297 "subsystems": [ 00:07:08.297 { 00:07:08.297 "subsystem": "fsdev", 00:07:08.297 "config": [ 00:07:08.297 { 00:07:08.297 "method": "fsdev_set_opts", 00:07:08.297 "params": { 00:07:08.297 "fsdev_io_pool_size": 65535, 00:07:08.297 "fsdev_io_cache_size": 256 00:07:08.297 } 00:07:08.297 } 00:07:08.297 ] 00:07:08.297 }, 00:07:08.297 { 00:07:08.297 "subsystem": "keyring", 00:07:08.297 "config": [] 00:07:08.297 }, 00:07:08.297 { 00:07:08.298 "subsystem": "iobuf", 00:07:08.298 "config": [ 00:07:08.298 { 00:07:08.298 "method": "iobuf_set_options", 00:07:08.298 "params": { 00:07:08.298 "small_pool_count": 8192, 00:07:08.298 "large_pool_count": 1024, 00:07:08.298 "small_bufsize": 8192, 00:07:08.298 "large_bufsize": 135168, 00:07:08.298 "enable_numa": false 00:07:08.298 } 00:07:08.298 } 00:07:08.298 ] 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "subsystem": "sock", 00:07:08.298 "config": [ 00:07:08.298 { 
00:07:08.298 "method": "sock_set_default_impl", 00:07:08.298 "params": { 00:07:08.298 "impl_name": "posix" 00:07:08.298 } 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "method": "sock_impl_set_options", 00:07:08.298 "params": { 00:07:08.298 "impl_name": "ssl", 00:07:08.298 "recv_buf_size": 4096, 00:07:08.298 "send_buf_size": 4096, 00:07:08.298 "enable_recv_pipe": true, 00:07:08.298 "enable_quickack": false, 00:07:08.298 "enable_placement_id": 0, 00:07:08.298 "enable_zerocopy_send_server": true, 00:07:08.298 "enable_zerocopy_send_client": false, 00:07:08.298 "zerocopy_threshold": 0, 00:07:08.298 "tls_version": 0, 00:07:08.298 "enable_ktls": false 00:07:08.298 } 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "method": "sock_impl_set_options", 00:07:08.298 "params": { 00:07:08.298 "impl_name": "posix", 00:07:08.298 "recv_buf_size": 2097152, 00:07:08.298 "send_buf_size": 2097152, 00:07:08.298 "enable_recv_pipe": true, 00:07:08.298 "enable_quickack": false, 00:07:08.298 "enable_placement_id": 0, 00:07:08.298 "enable_zerocopy_send_server": true, 00:07:08.298 "enable_zerocopy_send_client": false, 00:07:08.298 "zerocopy_threshold": 0, 00:07:08.298 "tls_version": 0, 00:07:08.298 "enable_ktls": false 00:07:08.298 } 00:07:08.298 } 00:07:08.298 ] 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "subsystem": "vmd", 00:07:08.298 "config": [] 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "subsystem": "accel", 00:07:08.298 "config": [ 00:07:08.298 { 00:07:08.298 "method": "accel_set_options", 00:07:08.298 "params": { 00:07:08.298 "small_cache_size": 128, 00:07:08.298 "large_cache_size": 16, 00:07:08.298 "task_count": 2048, 00:07:08.298 "sequence_count": 2048, 00:07:08.298 "buf_count": 2048 00:07:08.298 } 00:07:08.298 } 00:07:08.298 ] 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "subsystem": "bdev", 00:07:08.298 "config": [ 00:07:08.298 { 00:07:08.298 "method": "bdev_set_options", 00:07:08.298 "params": { 00:07:08.298 "bdev_io_pool_size": 65535, 00:07:08.298 "bdev_io_cache_size": 256, 00:07:08.298 "bdev_auto_examine": true, 00:07:08.298 "iobuf_small_cache_size": 128, 00:07:08.298 "iobuf_large_cache_size": 16 00:07:08.298 } 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "method": "bdev_raid_set_options", 00:07:08.298 "params": { 00:07:08.298 "process_window_size_kb": 1024, 00:07:08.298 "process_max_bandwidth_mb_sec": 0 00:07:08.298 } 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "method": "bdev_iscsi_set_options", 00:07:08.298 "params": { 00:07:08.298 "timeout_sec": 30 00:07:08.298 } 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "method": "bdev_nvme_set_options", 00:07:08.298 "params": { 00:07:08.298 "action_on_timeout": "none", 00:07:08.298 "timeout_us": 0, 00:07:08.298 "timeout_admin_us": 0, 00:07:08.298 "keep_alive_timeout_ms": 10000, 00:07:08.298 "arbitration_burst": 0, 00:07:08.298 "low_priority_weight": 0, 00:07:08.298 "medium_priority_weight": 0, 00:07:08.298 "high_priority_weight": 0, 00:07:08.298 "nvme_adminq_poll_period_us": 10000, 00:07:08.298 "nvme_ioq_poll_period_us": 0, 00:07:08.298 "io_queue_requests": 0, 00:07:08.298 "delay_cmd_submit": true, 00:07:08.298 "transport_retry_count": 4, 00:07:08.298 "bdev_retry_count": 3, 00:07:08.298 "transport_ack_timeout": 0, 00:07:08.298 "ctrlr_loss_timeout_sec": 0, 00:07:08.298 "reconnect_delay_sec": 0, 00:07:08.298 "fast_io_fail_timeout_sec": 0, 00:07:08.298 "disable_auto_failback": false, 00:07:08.298 "generate_uuids": false, 00:07:08.298 "transport_tos": 0, 00:07:08.298 "nvme_error_stat": false, 00:07:08.298 "rdma_srq_size": 0, 00:07:08.298 "io_path_stat": false, 
00:07:08.298 "allow_accel_sequence": false, 00:07:08.298 "rdma_max_cq_size": 0, 00:07:08.298 "rdma_cm_event_timeout_ms": 0, 00:07:08.298 "dhchap_digests": [ 00:07:08.298 "sha256", 00:07:08.298 "sha384", 00:07:08.298 "sha512" 00:07:08.298 ], 00:07:08.298 "dhchap_dhgroups": [ 00:07:08.298 "null", 00:07:08.298 "ffdhe2048", 00:07:08.298 "ffdhe3072", 00:07:08.298 "ffdhe4096", 00:07:08.298 "ffdhe6144", 00:07:08.298 "ffdhe8192" 00:07:08.298 ] 00:07:08.298 } 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "method": "bdev_nvme_set_hotplug", 00:07:08.298 "params": { 00:07:08.298 "period_us": 100000, 00:07:08.298 "enable": false 00:07:08.298 } 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "method": "bdev_wait_for_examine" 00:07:08.298 } 00:07:08.298 ] 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "subsystem": "scsi", 00:07:08.298 "config": null 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "subsystem": "scheduler", 00:07:08.298 "config": [ 00:07:08.298 { 00:07:08.298 "method": "framework_set_scheduler", 00:07:08.298 "params": { 00:07:08.298 "name": "static" 00:07:08.298 } 00:07:08.298 } 00:07:08.298 ] 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "subsystem": "vhost_scsi", 00:07:08.298 "config": [] 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "subsystem": "vhost_blk", 00:07:08.298 "config": [] 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "subsystem": "ublk", 00:07:08.298 "config": [] 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "subsystem": "nbd", 00:07:08.298 "config": [] 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "subsystem": "nvmf", 00:07:08.298 "config": [ 00:07:08.298 { 00:07:08.298 "method": "nvmf_set_config", 00:07:08.298 "params": { 00:07:08.298 "discovery_filter": "match_any", 00:07:08.298 "admin_cmd_passthru": { 00:07:08.298 "identify_ctrlr": false 00:07:08.298 }, 00:07:08.298 "dhchap_digests": [ 00:07:08.298 "sha256", 00:07:08.298 "sha384", 00:07:08.298 "sha512" 00:07:08.298 ], 00:07:08.298 "dhchap_dhgroups": [ 00:07:08.298 "null", 00:07:08.298 "ffdhe2048", 00:07:08.298 "ffdhe3072", 00:07:08.298 "ffdhe4096", 00:07:08.298 "ffdhe6144", 00:07:08.298 "ffdhe8192" 00:07:08.298 ] 00:07:08.298 } 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "method": "nvmf_set_max_subsystems", 00:07:08.298 "params": { 00:07:08.298 "max_subsystems": 1024 00:07:08.298 } 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "method": "nvmf_set_crdt", 00:07:08.298 "params": { 00:07:08.298 "crdt1": 0, 00:07:08.298 "crdt2": 0, 00:07:08.298 "crdt3": 0 00:07:08.298 } 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "method": "nvmf_create_transport", 00:07:08.298 "params": { 00:07:08.298 "trtype": "TCP", 00:07:08.298 "max_queue_depth": 128, 00:07:08.298 "max_io_qpairs_per_ctrlr": 127, 00:07:08.298 "in_capsule_data_size": 4096, 00:07:08.298 "max_io_size": 131072, 00:07:08.298 "io_unit_size": 131072, 00:07:08.298 "max_aq_depth": 128, 00:07:08.298 "num_shared_buffers": 511, 00:07:08.298 "buf_cache_size": 4294967295, 00:07:08.298 "dif_insert_or_strip": false, 00:07:08.298 "zcopy": false, 00:07:08.298 "c2h_success": true, 00:07:08.298 "sock_priority": 0, 00:07:08.298 "abort_timeout_sec": 1, 00:07:08.298 "ack_timeout": 0, 00:07:08.298 "data_wr_pool_size": 0 00:07:08.298 } 00:07:08.298 } 00:07:08.298 ] 00:07:08.298 }, 00:07:08.298 { 00:07:08.298 "subsystem": "iscsi", 00:07:08.298 "config": [ 00:07:08.298 { 00:07:08.298 "method": "iscsi_set_options", 00:07:08.298 "params": { 00:07:08.298 "node_base": "iqn.2016-06.io.spdk", 00:07:08.298 "max_sessions": 128, 00:07:08.298 "max_connections_per_session": 2, 00:07:08.298 "max_queue_depth": 64, 00:07:08.298 
"default_time2wait": 2, 00:07:08.298 "default_time2retain": 20, 00:07:08.298 "first_burst_length": 8192, 00:07:08.298 "immediate_data": true, 00:07:08.298 "allow_duplicated_isid": false, 00:07:08.298 "error_recovery_level": 0, 00:07:08.298 "nop_timeout": 60, 00:07:08.298 "nop_in_interval": 30, 00:07:08.298 "disable_chap": false, 00:07:08.298 "require_chap": false, 00:07:08.298 "mutual_chap": false, 00:07:08.298 "chap_group": 0, 00:07:08.298 "max_large_datain_per_connection": 64, 00:07:08.298 "max_r2t_per_connection": 4, 00:07:08.298 "pdu_pool_size": 36864, 00:07:08.298 "immediate_data_pool_size": 16384, 00:07:08.298 "data_out_pool_size": 2048 00:07:08.298 } 00:07:08.298 } 00:07:08.298 ] 00:07:08.298 } 00:07:08.298 ] 00:07:08.298 } 00:07:08.298 11:14:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:08.298 11:14:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58201 00:07:08.298 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58201 ']' 00:07:08.298 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58201 00:07:08.298 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:08.298 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.298 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58201 00:07:08.298 killing process with pid 58201 00:07:08.298 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.298 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.298 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58201' 00:07:08.298 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58201 00:07:08.298 11:14:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58201 00:07:10.831 11:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58257 00:07:10.831 11:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:10.831 11:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:16.127 11:14:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58257 00:07:16.127 11:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58257 ']' 00:07:16.127 11:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58257 00:07:16.127 11:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:16.127 11:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.127 11:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58257 00:07:16.127 killing process with pid 58257 00:07:16.127 11:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.127 11:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.127 11:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58257' 00:07:16.128 11:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58257 00:07:16.128 11:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58257 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:18.663 00:07:18.663 real 0m11.431s 00:07:18.663 user 0m10.788s 00:07:18.663 sys 0m0.966s 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.663 ************************************ 00:07:18.663 END TEST skip_rpc_with_json 00:07:18.663 ************************************ 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:18.663 11:14:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:18.663 11:14:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.663 11:14:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.663 11:14:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.663 ************************************ 00:07:18.663 START TEST skip_rpc_with_delay 00:07:18.663 ************************************ 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:18.663 [2024-12-10 11:14:45.384313] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
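The ERROR record above is the whole point of skip_rpc_with_delay: --wait-for-rpc pauses startup until an RPC tells the app to continue, which is meaningless when --no-rpc-server suppresses the listener, so spdk_app_start rejects the combination. A sketch of the same expected-failure check, without the harness's NOT wrapper:

  # the flag combination is refused at startup; non-zero exit is the pass condition
  if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: target accepted --wait-for-rpc without an RPC server" >&2
      exit 1
  fi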
00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:18.663 00:07:18.663 real 0m0.197s 00:07:18.663 user 0m0.098s 00:07:18.663 sys 0m0.098s 00:07:18.663 ************************************ 00:07:18.663 END TEST skip_rpc_with_delay 00:07:18.663 ************************************ 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.663 11:14:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:18.663 11:14:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:18.663 11:14:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:18.663 11:14:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:18.663 11:14:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.663 11:14:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.663 11:14:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.663 ************************************ 00:07:18.663 START TEST exit_on_failed_rpc_init 00:07:18.663 ************************************ 00:07:18.663 11:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:18.663 11:14:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58385 00:07:18.663 11:14:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:18.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.663 11:14:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58385 00:07:18.663 11:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58385 ']' 00:07:18.663 11:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.663 11:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.663 11:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.663 11:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.663 11:14:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:18.663 [2024-12-10 11:14:45.654667] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:07:18.663 [2024-12-10 11:14:45.654798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58385 ] 00:07:18.922 [2024-12-10 11:14:45.837735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.922 [2024-12-10 11:14:45.958466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.878 11:14:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.878 11:14:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:19.878 11:14:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:19.878 11:14:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:19.878 11:14:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:19.878 11:14:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:19.878 11:14:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:19.878 11:14:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.878 11:14:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:19.878 11:14:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.878 11:14:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:19.878 11:14:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.878 11:14:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:19.878 11:14:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:19.878 11:14:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:19.878 [2024-12-10 11:14:46.970358] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:19.878 [2024-12-10 11:14:46.970481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58414 ] 00:07:20.137 [2024-12-10 11:14:47.155531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.396 [2024-12-10 11:14:47.264376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.396 [2024-12-10 11:14:47.264478] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
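The two ERROR records above are the expected outcome: both instances default to the same RPC listen path, so the second target's rpc_initialize fails and spdk_app_start returns non-zero, which is exactly what exit_on_failed_rpc_init asserts (the harness then maps that through its es=234/106/1 bookkeeping below). A minimal reproduction sketch, assuming the default /var/tmp/spdk.sock:

  build/bin/spdk_tgt -m 0x1 & first=$!
  sleep 5                                   # let the first instance bind /var/tmp/spdk.sock
  if build/bin/spdk_tgt -m 0x2; then        # same default socket path -> init must fail
      echo "unexpected: second instance initialized" >&2
      kill "$first"; exit 1
  fi
  kill "$first"; wait "$first" || true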
00:07:20.396 [2024-12-10 11:14:47.264496] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:20.396 [2024-12-10 11:14:47.264515] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58385 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58385 ']' 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58385 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58385 00:07:20.655 killing process with pid 58385 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58385' 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58385 00:07:20.655 11:14:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58385 00:07:23.190 00:07:23.190 real 0m4.428s 00:07:23.190 user 0m4.714s 00:07:23.190 sys 0m0.611s 00:07:23.190 11:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.190 ************************************ 00:07:23.190 END TEST exit_on_failed_rpc_init 00:07:23.190 ************************************ 00:07:23.190 11:14:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:23.190 11:14:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:23.190 ************************************ 00:07:23.190 END TEST skip_rpc 00:07:23.190 ************************************ 00:07:23.190 00:07:23.190 real 0m24.062s 00:07:23.190 user 0m22.767s 00:07:23.190 sys 0m2.424s 00:07:23.190 11:14:50 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.190 11:14:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.190 11:14:50 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:23.190 11:14:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.190 11:14:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.190 11:14:50 -- common/autotest_common.sh@10 -- # set +x 00:07:23.190 
************************************ 00:07:23.190 START TEST rpc_client 00:07:23.190 ************************************ 00:07:23.190 11:14:50 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:23.190 * Looking for test storage... 00:07:23.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:23.190 11:14:50 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:23.190 11:14:50 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:07:23.190 11:14:50 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:23.449 11:14:50 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.449 11:14:50 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:23.449 11:14:50 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.449 11:14:50 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:23.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.449 --rc genhtml_branch_coverage=1 00:07:23.449 --rc genhtml_function_coverage=1 00:07:23.449 --rc genhtml_legend=1 00:07:23.449 --rc geninfo_all_blocks=1 00:07:23.449 --rc geninfo_unexecuted_blocks=1 00:07:23.449 00:07:23.449 ' 00:07:23.449 11:14:50 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:23.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.449 --rc genhtml_branch_coverage=1 00:07:23.450 --rc genhtml_function_coverage=1 00:07:23.450 --rc genhtml_legend=1 00:07:23.450 --rc geninfo_all_blocks=1 00:07:23.450 --rc geninfo_unexecuted_blocks=1 00:07:23.450 00:07:23.450 ' 00:07:23.450 11:14:50 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:23.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.450 --rc genhtml_branch_coverage=1 00:07:23.450 --rc genhtml_function_coverage=1 00:07:23.450 --rc genhtml_legend=1 00:07:23.450 --rc geninfo_all_blocks=1 00:07:23.450 --rc geninfo_unexecuted_blocks=1 00:07:23.450 00:07:23.450 ' 00:07:23.450 11:14:50 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:23.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.450 --rc genhtml_branch_coverage=1 00:07:23.450 --rc genhtml_function_coverage=1 00:07:23.450 --rc genhtml_legend=1 00:07:23.450 --rc geninfo_all_blocks=1 00:07:23.450 --rc geninfo_unexecuted_blocks=1 00:07:23.450 00:07:23.450 ' 00:07:23.450 11:14:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:23.450 OK 00:07:23.450 11:14:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:23.450 00:07:23.450 real 0m0.310s 00:07:23.450 user 0m0.163s 00:07:23.450 sys 0m0.160s 00:07:23.450 11:14:50 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.450 11:14:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:23.450 ************************************ 00:07:23.450 END TEST rpc_client 00:07:23.450 ************************************ 00:07:23.450 11:14:50 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:23.450 11:14:50 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.450 11:14:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.450 11:14:50 -- common/autotest_common.sh@10 -- # set +x 00:07:23.450 ************************************ 00:07:23.450 START TEST json_config 00:07:23.450 ************************************ 00:07:23.450 11:14:50 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:23.709 11:14:50 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:23.709 11:14:50 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:07:23.709 11:14:50 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:23.709 11:14:50 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:23.709 11:14:50 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.710 11:14:50 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.710 11:14:50 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.710 11:14:50 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.710 11:14:50 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.710 11:14:50 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.710 11:14:50 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.710 11:14:50 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.710 11:14:50 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.710 11:14:50 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.710 11:14:50 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.710 11:14:50 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:23.710 11:14:50 json_config -- scripts/common.sh@345 -- # : 1 00:07:23.710 11:14:50 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.710 11:14:50 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.710 11:14:50 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:23.710 11:14:50 json_config -- scripts/common.sh@353 -- # local d=1 00:07:23.710 11:14:50 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.710 11:14:50 json_config -- scripts/common.sh@355 -- # echo 1 00:07:23.710 11:14:50 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.710 11:14:50 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:23.710 11:14:50 json_config -- scripts/common.sh@353 -- # local d=2 00:07:23.710 11:14:50 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.710 11:14:50 json_config -- scripts/common.sh@355 -- # echo 2 00:07:23.710 11:14:50 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.710 11:14:50 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.710 11:14:50 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.710 11:14:50 json_config -- scripts/common.sh@368 -- # return 0 00:07:23.710 11:14:50 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.710 11:14:50 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:23.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.710 --rc genhtml_branch_coverage=1 00:07:23.710 --rc genhtml_function_coverage=1 00:07:23.710 --rc genhtml_legend=1 00:07:23.710 --rc geninfo_all_blocks=1 00:07:23.710 --rc geninfo_unexecuted_blocks=1 00:07:23.710 00:07:23.710 ' 00:07:23.710 11:14:50 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:23.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.710 --rc genhtml_branch_coverage=1 00:07:23.710 --rc genhtml_function_coverage=1 00:07:23.710 --rc genhtml_legend=1 00:07:23.710 --rc geninfo_all_blocks=1 00:07:23.710 --rc geninfo_unexecuted_blocks=1 00:07:23.710 00:07:23.710 ' 00:07:23.710 11:14:50 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:23.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.710 --rc genhtml_branch_coverage=1 00:07:23.710 --rc genhtml_function_coverage=1 00:07:23.710 --rc genhtml_legend=1 00:07:23.710 --rc geninfo_all_blocks=1 00:07:23.710 --rc geninfo_unexecuted_blocks=1 00:07:23.710 00:07:23.710 ' 00:07:23.710 11:14:50 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:23.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.710 --rc genhtml_branch_coverage=1 00:07:23.710 --rc genhtml_function_coverage=1 00:07:23.710 --rc genhtml_legend=1 00:07:23.710 --rc geninfo_all_blocks=1 00:07:23.710 --rc geninfo_unexecuted_blocks=1 00:07:23.710 00:07:23.710 ' 00:07:23.710 11:14:50 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.710 11:14:50 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d373939e-959a-48c7-a724-02880d24a783 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=d373939e-959a-48c7-a724-02880d24a783 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:23.710 11:14:50 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:23.710 11:14:50 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.710 11:14:50 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.710 11:14:50 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.710 11:14:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.710 11:14:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.710 11:14:50 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.710 11:14:50 json_config -- paths/export.sh@5 -- # export PATH 00:07:23.710 11:14:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@51 -- # : 0 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:23.710 11:14:50 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:23.710 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:23.710 11:14:50 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:23.710 11:14:50 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:23.710 11:14:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:23.710 11:14:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:23.710 11:14:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:23.710 11:14:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:23.710 11:14:50 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:23.710 WARNING: No tests are enabled so not running JSON configuration tests 00:07:23.710 11:14:50 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:23.710 00:07:23.710 real 0m0.229s 00:07:23.710 user 0m0.129s 00:07:23.710 sys 0m0.096s 00:07:23.710 11:14:50 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.710 11:14:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:23.710 ************************************ 00:07:23.710 END TEST json_config 00:07:23.710 ************************************ 00:07:23.710 11:14:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:23.710 11:14:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.710 11:14:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.710 11:14:50 -- common/autotest_common.sh@10 -- # set +x 00:07:23.710 ************************************ 00:07:23.710 START TEST json_config_extra_key 00:07:23.710 ************************************ 00:07:23.710 11:14:50 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:23.970 11:14:50 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:23.970 11:14:50 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:07:23.970 11:14:50 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:23.970 11:14:50 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.970 11:14:50 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.970 11:14:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:23.970 11:14:50 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.970 11:14:50 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:23.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.970 --rc genhtml_branch_coverage=1 00:07:23.970 --rc genhtml_function_coverage=1 00:07:23.970 --rc genhtml_legend=1 00:07:23.970 --rc geninfo_all_blocks=1 00:07:23.970 --rc geninfo_unexecuted_blocks=1 00:07:23.970 00:07:23.970 ' 00:07:23.970 11:14:50 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:23.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.970 --rc genhtml_branch_coverage=1 00:07:23.970 --rc genhtml_function_coverage=1 00:07:23.970 --rc genhtml_legend=1 00:07:23.970 --rc geninfo_all_blocks=1 00:07:23.970 --rc geninfo_unexecuted_blocks=1 00:07:23.970 00:07:23.970 ' 00:07:23.970 11:14:50 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:23.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.970 --rc genhtml_branch_coverage=1 00:07:23.970 --rc genhtml_function_coverage=1 00:07:23.970 --rc genhtml_legend=1 00:07:23.970 --rc geninfo_all_blocks=1 00:07:23.970 --rc geninfo_unexecuted_blocks=1 00:07:23.970 00:07:23.970 ' 00:07:23.970 11:14:50 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:23.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.970 --rc genhtml_branch_coverage=1 00:07:23.970 --rc 
genhtml_function_coverage=1 00:07:23.970 --rc genhtml_legend=1 00:07:23.970 --rc geninfo_all_blocks=1 00:07:23.970 --rc geninfo_unexecuted_blocks=1 00:07:23.970 00:07:23.970 ' 00:07:23.970 11:14:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:23.970 11:14:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:23.970 11:14:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.970 11:14:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.970 11:14:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.970 11:14:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.970 11:14:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.970 11:14:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.970 11:14:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.970 11:14:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.970 11:14:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.970 11:14:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.970 11:14:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d373939e-959a-48c7-a724-02880d24a783 00:07:23.970 11:14:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d373939e-959a-48c7-a724-02880d24a783 00:07:23.970 11:14:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.970 11:14:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.970 11:14:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:23.970 11:14:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.970 11:14:51 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:23.970 11:14:51 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:23.970 11:14:51 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.970 11:14:51 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.970 11:14:51 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.970 11:14:51 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.970 11:14:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.970 11:14:51 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.970 11:14:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:23.970 11:14:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.970 11:14:51 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:23.970 11:14:51 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:23.970 11:14:51 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:23.970 11:14:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.970 11:14:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.970 11:14:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.970 11:14:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:23.970 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:23.970 11:14:51 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:23.970 11:14:51 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:23.970 11:14:51 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:23.970 11:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:23.970 11:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:23.970 11:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:23.970 11:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:23.971 11:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:23.971 11:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:23.971 11:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:23.971 11:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:23.971 11:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:23.971 11:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:23.971 INFO: launching applications... 00:07:23.971 11:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
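[Editor's note] The shell error recorded twice above (/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected) is benign for this run but real: the traced command '[' '' -eq 1 ']' shows test(1) being handed an empty string where -eq requires an integer, i.e. an unset variable reaches the comparison with no default. A minimal sketch of the usual guard, using a hypothetical flag name since the actual variable at nvmf/common.sh line 33 is not visible in this log:

    # SOME_TEST_FLAG is a hypothetical stand-in for the unset variable;
    # ${VAR:-0} substitutes 0 when VAR is empty or unset, so test(1)
    # always sees an integer and the warning disappears.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi

The failed comparison simply returns a nonzero status, which is why the trace falls straight through to the next branch ('[' -n '' ']') and the test continues unharmed.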
00:07:23.971 11:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:23.971 11:14:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:23.971 11:14:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:23.971 11:14:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:23.971 11:14:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:23.971 11:14:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:23.971 11:14:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:23.971 11:14:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:23.971 11:14:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58624 00:07:23.971 11:14:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:23.971 Waiting for target to run... 00:07:23.971 11:14:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58624 /var/tmp/spdk_tgt.sock 00:07:23.971 11:14:51 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:23.971 11:14:51 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58624 ']' 00:07:23.971 11:14:51 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:23.971 11:14:51 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:23.971 11:14:51 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:23.971 11:14:51 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.971 11:14:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:24.229 [2024-12-10 11:14:51.142454] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:24.229 [2024-12-10 11:14:51.142584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58624 ] 00:07:24.488 [2024-12-10 11:14:51.540313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.746 [2024-12-10 11:14:51.651569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.313 00:07:25.313 INFO: shutting down applications... 00:07:25.313 11:14:52 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.313 11:14:52 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:25.313 11:14:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:25.313 11:14:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
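[Editor's note] Both JSON-config tests above drive the same lifecycle from json_config/common.sh: start spdk_tgt in the background with a config file, poll until its RPC socket answers ("Waiting for target to run..."), run the checks, then send SIGINT and poll the PID until it exits — the kill -0 / sleep 0.5 loop that follows, capped at 30 iterations (~15 s). A condensed sketch of that pattern, with waitforlisten reduced to a plain RPC probe (the real helper in autotest_common.sh does more bookkeeping):

    # Launch the target with a JSON config on a private RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --json extra_key.json &
    pid=$!

    # Wait until the app listens: retry a cheap RPC until it succeeds.
    until scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.5
    done

    # ... test body ...

    # Graceful shutdown: SIGINT, then poll the PID for up to 30 * 0.5 s.
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done

The six sleep 0.5 rounds visible below (timestamps 11:14:52 through 11:14:55) are exactly this loop waiting for PID 58624 to drain and exit before the test prints "SPDK target shutdown done".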
00:07:25.313 11:14:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:25.313 11:14:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:25.313 11:14:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:25.313 11:14:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58624 ]] 00:07:25.313 11:14:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58624 00:07:25.313 11:14:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:25.313 11:14:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:25.313 11:14:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58624 00:07:25.313 11:14:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:25.879 11:14:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:25.879 11:14:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:25.879 11:14:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58624 00:07:25.879 11:14:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:26.443 11:14:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:26.443 11:14:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:26.443 11:14:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58624 00:07:26.443 11:14:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:27.040 11:14:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:27.040 11:14:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:27.040 11:14:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58624 00:07:27.040 11:14:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:27.299 11:14:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:27.299 11:14:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:27.299 11:14:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58624 00:07:27.299 11:14:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:27.866 11:14:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:27.866 11:14:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:27.866 11:14:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58624 00:07:27.866 11:14:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:28.434 11:14:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:28.434 11:14:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:28.434 11:14:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58624 00:07:28.434 11:14:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:28.434 11:14:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:28.434 11:14:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:28.434 SPDK target shutdown done 00:07:28.434 11:14:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:28.434 Success 00:07:28.434 11:14:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:28.434 00:07:28.434 real 0m4.610s 00:07:28.434 user 0m4.032s 00:07:28.434 sys 0m0.607s 00:07:28.434 
************************************ 00:07:28.434 END TEST json_config_extra_key 00:07:28.434 ************************************ 00:07:28.434 11:14:55 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.434 11:14:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:28.434 11:14:55 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:28.434 11:14:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.434 11:14:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.434 11:14:55 -- common/autotest_common.sh@10 -- # set +x 00:07:28.434 ************************************ 00:07:28.434 START TEST alias_rpc 00:07:28.434 ************************************ 00:07:28.434 11:14:55 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:28.692 * Looking for test storage... 00:07:28.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:28.692 11:14:55 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:28.692 11:14:55 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:28.692 11:14:55 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:28.692 11:14:55 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.692 11:14:55 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:28.692 11:14:55 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.692 11:14:55 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:28.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.692 --rc genhtml_branch_coverage=1 00:07:28.692 --rc genhtml_function_coverage=1 00:07:28.692 --rc genhtml_legend=1 00:07:28.692 --rc geninfo_all_blocks=1 00:07:28.692 --rc geninfo_unexecuted_blocks=1 00:07:28.692 00:07:28.692 ' 00:07:28.692 11:14:55 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:28.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.692 --rc genhtml_branch_coverage=1 00:07:28.692 --rc genhtml_function_coverage=1 00:07:28.692 --rc genhtml_legend=1 00:07:28.692 --rc geninfo_all_blocks=1 00:07:28.692 --rc geninfo_unexecuted_blocks=1 00:07:28.692 00:07:28.692 ' 00:07:28.692 11:14:55 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:28.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.692 --rc genhtml_branch_coverage=1 00:07:28.692 --rc genhtml_function_coverage=1 00:07:28.692 --rc genhtml_legend=1 00:07:28.692 --rc geninfo_all_blocks=1 00:07:28.692 --rc geninfo_unexecuted_blocks=1 00:07:28.692 00:07:28.692 ' 00:07:28.692 11:14:55 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:28.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.692 --rc genhtml_branch_coverage=1 00:07:28.692 --rc genhtml_function_coverage=1 00:07:28.692 --rc genhtml_legend=1 00:07:28.692 --rc geninfo_all_blocks=1 00:07:28.692 --rc geninfo_unexecuted_blocks=1 00:07:28.692 00:07:28.692 ' 00:07:28.692 11:14:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:28.692 11:14:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:28.692 11:14:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58730 00:07:28.692 11:14:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58730 00:07:28.692 11:14:55 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58730 ']' 00:07:28.692 11:14:55 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.692 11:14:55 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.692 11:14:55 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:28.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.692 11:14:55 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.692 11:14:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.951 [2024-12-10 11:14:55.823411] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:28.951 [2024-12-10 11:14:55.823737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58730 ] 00:07:28.951 [2024-12-10 11:14:56.006271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.210 [2024-12-10 11:14:56.128145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.147 11:14:57 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.147 11:14:57 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:30.147 11:14:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:30.147 11:14:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58730 00:07:30.147 11:14:57 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58730 ']' 00:07:30.147 11:14:57 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58730 00:07:30.406 11:14:57 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:30.406 11:14:57 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.406 11:14:57 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58730 00:07:30.406 killing process with pid 58730 00:07:30.406 11:14:57 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.406 11:14:57 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.406 11:14:57 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58730' 00:07:30.406 11:14:57 alias_rpc -- common/autotest_common.sh@973 -- # kill 58730 00:07:30.406 11:14:57 alias_rpc -- common/autotest_common.sh@978 -- # wait 58730 00:07:32.941 ************************************ 00:07:32.941 END TEST alias_rpc 00:07:32.941 ************************************ 00:07:32.941 00:07:32.941 real 0m4.242s 00:07:32.941 user 0m4.177s 00:07:32.941 sys 0m0.636s 00:07:32.941 11:14:59 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.941 11:14:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.941 11:14:59 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:32.941 11:14:59 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:32.941 11:14:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.941 11:14:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.941 11:14:59 -- common/autotest_common.sh@10 -- # set +x 00:07:32.941 ************************************ 00:07:32.941 START TEST spdkcli_tcp 00:07:32.941 ************************************ 00:07:32.941 11:14:59 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:32.941 * Looking for test storage... 
00:07:32.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:32.941 11:14:59 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:32.941 11:14:59 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:32.941 11:14:59 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:32.941 11:14:59 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:32.941 11:14:59 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.941 11:15:00 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:32.941 11:15:00 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.941 11:15:00 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:32.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.941 --rc genhtml_branch_coverage=1 00:07:32.941 --rc genhtml_function_coverage=1 00:07:32.941 --rc genhtml_legend=1 00:07:32.941 --rc geninfo_all_blocks=1 00:07:32.941 --rc geninfo_unexecuted_blocks=1 00:07:32.941 00:07:32.941 ' 00:07:32.941 11:15:00 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:32.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.941 --rc genhtml_branch_coverage=1 00:07:32.941 --rc genhtml_function_coverage=1 00:07:32.941 --rc genhtml_legend=1 00:07:32.941 --rc geninfo_all_blocks=1 00:07:32.941 --rc geninfo_unexecuted_blocks=1 00:07:32.941 
00:07:32.941 ' 00:07:32.941 11:15:00 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:32.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.941 --rc genhtml_branch_coverage=1 00:07:32.941 --rc genhtml_function_coverage=1 00:07:32.941 --rc genhtml_legend=1 00:07:32.941 --rc geninfo_all_blocks=1 00:07:32.941 --rc geninfo_unexecuted_blocks=1 00:07:32.941 00:07:32.941 ' 00:07:32.941 11:15:00 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:32.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.941 --rc genhtml_branch_coverage=1 00:07:32.941 --rc genhtml_function_coverage=1 00:07:32.941 --rc genhtml_legend=1 00:07:32.941 --rc geninfo_all_blocks=1 00:07:32.941 --rc geninfo_unexecuted_blocks=1 00:07:32.941 00:07:32.941 ' 00:07:32.941 11:15:00 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:32.941 11:15:00 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:32.941 11:15:00 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:32.941 11:15:00 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:32.941 11:15:00 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:32.941 11:15:00 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:32.941 11:15:00 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:32.941 11:15:00 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.941 11:15:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:32.941 11:15:00 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58843 00:07:32.941 11:15:00 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58843 00:07:32.941 11:15:00 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:32.941 11:15:00 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58843 ']' 00:07:32.941 11:15:00 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.941 11:15:00 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.941 11:15:00 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.941 11:15:00 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.942 11:15:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:33.201 [2024-12-10 11:15:00.148889] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
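[Editor's note] spdkcli_tcp exercises the RPC server over TCP without spdk_tgt itself opening a TCP port: as the trace just below shows, socat listens on 127.0.0.1:9998 and forwards each connection to the UNIX socket /var/tmp/spdk.sock, and rpc.py then talks to the TCP side with connection retries. A sketch of the bridge, assuming only the invocations visible in the trace:

    # Bridge TCP port 9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Client side: -s/-p select host/port, -r retries the connection
    # (the bridge may not be ready yet), -t bounds each attempt in seconds.
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"

The long JSON array that follows is the rpc_get_methods reply arriving over that TCP detour — a quick end-to-end proof that the RPC transport works regardless of socket family.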
00:07:33.201 [2024-12-10 11:15:00.149021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58843 ] 00:07:33.460 [2024-12-10 11:15:00.334104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.460 [2024-12-10 11:15:00.450820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.460 [2024-12-10 11:15:00.450856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.394 11:15:01 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.394 11:15:01 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:34.394 11:15:01 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58860 00:07:34.394 11:15:01 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:34.394 11:15:01 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:34.653 [ 00:07:34.653 "bdev_malloc_delete", 00:07:34.653 "bdev_malloc_create", 00:07:34.653 "bdev_null_resize", 00:07:34.653 "bdev_null_delete", 00:07:34.653 "bdev_null_create", 00:07:34.653 "bdev_nvme_cuse_unregister", 00:07:34.653 "bdev_nvme_cuse_register", 00:07:34.653 "bdev_opal_new_user", 00:07:34.653 "bdev_opal_set_lock_state", 00:07:34.653 "bdev_opal_delete", 00:07:34.653 "bdev_opal_get_info", 00:07:34.653 "bdev_opal_create", 00:07:34.653 "bdev_nvme_opal_revert", 00:07:34.653 "bdev_nvme_opal_init", 00:07:34.653 "bdev_nvme_send_cmd", 00:07:34.653 "bdev_nvme_set_keys", 00:07:34.653 "bdev_nvme_get_path_iostat", 00:07:34.653 "bdev_nvme_get_mdns_discovery_info", 00:07:34.653 "bdev_nvme_stop_mdns_discovery", 00:07:34.653 "bdev_nvme_start_mdns_discovery", 00:07:34.653 "bdev_nvme_set_multipath_policy", 00:07:34.653 "bdev_nvme_set_preferred_path", 00:07:34.653 "bdev_nvme_get_io_paths", 00:07:34.653 "bdev_nvme_remove_error_injection", 00:07:34.653 "bdev_nvme_add_error_injection", 00:07:34.653 "bdev_nvme_get_discovery_info", 00:07:34.653 "bdev_nvme_stop_discovery", 00:07:34.653 "bdev_nvme_start_discovery", 00:07:34.653 "bdev_nvme_get_controller_health_info", 00:07:34.653 "bdev_nvme_disable_controller", 00:07:34.653 "bdev_nvme_enable_controller", 00:07:34.653 "bdev_nvme_reset_controller", 00:07:34.653 "bdev_nvme_get_transport_statistics", 00:07:34.653 "bdev_nvme_apply_firmware", 00:07:34.653 "bdev_nvme_detach_controller", 00:07:34.653 "bdev_nvme_get_controllers", 00:07:34.653 "bdev_nvme_attach_controller", 00:07:34.653 "bdev_nvme_set_hotplug", 00:07:34.653 "bdev_nvme_set_options", 00:07:34.653 "bdev_passthru_delete", 00:07:34.653 "bdev_passthru_create", 00:07:34.653 "bdev_lvol_set_parent_bdev", 00:07:34.653 "bdev_lvol_set_parent", 00:07:34.653 "bdev_lvol_check_shallow_copy", 00:07:34.653 "bdev_lvol_start_shallow_copy", 00:07:34.653 "bdev_lvol_grow_lvstore", 00:07:34.653 "bdev_lvol_get_lvols", 00:07:34.653 "bdev_lvol_get_lvstores", 00:07:34.653 "bdev_lvol_delete", 00:07:34.653 "bdev_lvol_set_read_only", 00:07:34.653 "bdev_lvol_resize", 00:07:34.653 "bdev_lvol_decouple_parent", 00:07:34.653 "bdev_lvol_inflate", 00:07:34.653 "bdev_lvol_rename", 00:07:34.653 "bdev_lvol_clone_bdev", 00:07:34.653 "bdev_lvol_clone", 00:07:34.653 "bdev_lvol_snapshot", 00:07:34.653 "bdev_lvol_create", 00:07:34.653 "bdev_lvol_delete_lvstore", 00:07:34.653 "bdev_lvol_rename_lvstore", 00:07:34.653 
"bdev_lvol_create_lvstore", 00:07:34.653 "bdev_raid_set_options", 00:07:34.653 "bdev_raid_remove_base_bdev", 00:07:34.653 "bdev_raid_add_base_bdev", 00:07:34.653 "bdev_raid_delete", 00:07:34.653 "bdev_raid_create", 00:07:34.653 "bdev_raid_get_bdevs", 00:07:34.653 "bdev_error_inject_error", 00:07:34.653 "bdev_error_delete", 00:07:34.653 "bdev_error_create", 00:07:34.653 "bdev_split_delete", 00:07:34.653 "bdev_split_create", 00:07:34.653 "bdev_delay_delete", 00:07:34.653 "bdev_delay_create", 00:07:34.653 "bdev_delay_update_latency", 00:07:34.653 "bdev_zone_block_delete", 00:07:34.653 "bdev_zone_block_create", 00:07:34.653 "blobfs_create", 00:07:34.653 "blobfs_detect", 00:07:34.653 "blobfs_set_cache_size", 00:07:34.653 "bdev_xnvme_delete", 00:07:34.653 "bdev_xnvme_create", 00:07:34.653 "bdev_aio_delete", 00:07:34.653 "bdev_aio_rescan", 00:07:34.653 "bdev_aio_create", 00:07:34.653 "bdev_ftl_set_property", 00:07:34.653 "bdev_ftl_get_properties", 00:07:34.653 "bdev_ftl_get_stats", 00:07:34.653 "bdev_ftl_unmap", 00:07:34.653 "bdev_ftl_unload", 00:07:34.653 "bdev_ftl_delete", 00:07:34.653 "bdev_ftl_load", 00:07:34.653 "bdev_ftl_create", 00:07:34.653 "bdev_virtio_attach_controller", 00:07:34.653 "bdev_virtio_scsi_get_devices", 00:07:34.653 "bdev_virtio_detach_controller", 00:07:34.653 "bdev_virtio_blk_set_hotplug", 00:07:34.653 "bdev_iscsi_delete", 00:07:34.653 "bdev_iscsi_create", 00:07:34.653 "bdev_iscsi_set_options", 00:07:34.653 "accel_error_inject_error", 00:07:34.653 "ioat_scan_accel_module", 00:07:34.653 "dsa_scan_accel_module", 00:07:34.653 "iaa_scan_accel_module", 00:07:34.653 "keyring_file_remove_key", 00:07:34.653 "keyring_file_add_key", 00:07:34.653 "keyring_linux_set_options", 00:07:34.653 "fsdev_aio_delete", 00:07:34.653 "fsdev_aio_create", 00:07:34.653 "iscsi_get_histogram", 00:07:34.653 "iscsi_enable_histogram", 00:07:34.653 "iscsi_set_options", 00:07:34.653 "iscsi_get_auth_groups", 00:07:34.653 "iscsi_auth_group_remove_secret", 00:07:34.653 "iscsi_auth_group_add_secret", 00:07:34.653 "iscsi_delete_auth_group", 00:07:34.653 "iscsi_create_auth_group", 00:07:34.653 "iscsi_set_discovery_auth", 00:07:34.653 "iscsi_get_options", 00:07:34.653 "iscsi_target_node_request_logout", 00:07:34.653 "iscsi_target_node_set_redirect", 00:07:34.653 "iscsi_target_node_set_auth", 00:07:34.653 "iscsi_target_node_add_lun", 00:07:34.653 "iscsi_get_stats", 00:07:34.653 "iscsi_get_connections", 00:07:34.653 "iscsi_portal_group_set_auth", 00:07:34.653 "iscsi_start_portal_group", 00:07:34.653 "iscsi_delete_portal_group", 00:07:34.653 "iscsi_create_portal_group", 00:07:34.653 "iscsi_get_portal_groups", 00:07:34.653 "iscsi_delete_target_node", 00:07:34.653 "iscsi_target_node_remove_pg_ig_maps", 00:07:34.653 "iscsi_target_node_add_pg_ig_maps", 00:07:34.653 "iscsi_create_target_node", 00:07:34.653 "iscsi_get_target_nodes", 00:07:34.653 "iscsi_delete_initiator_group", 00:07:34.653 "iscsi_initiator_group_remove_initiators", 00:07:34.653 "iscsi_initiator_group_add_initiators", 00:07:34.653 "iscsi_create_initiator_group", 00:07:34.653 "iscsi_get_initiator_groups", 00:07:34.653 "nvmf_set_crdt", 00:07:34.653 "nvmf_set_config", 00:07:34.653 "nvmf_set_max_subsystems", 00:07:34.653 "nvmf_stop_mdns_prr", 00:07:34.653 "nvmf_publish_mdns_prr", 00:07:34.653 "nvmf_subsystem_get_listeners", 00:07:34.653 "nvmf_subsystem_get_qpairs", 00:07:34.653 "nvmf_subsystem_get_controllers", 00:07:34.653 "nvmf_get_stats", 00:07:34.653 "nvmf_get_transports", 00:07:34.653 "nvmf_create_transport", 00:07:34.653 "nvmf_get_targets", 00:07:34.653 
"nvmf_delete_target", 00:07:34.653 "nvmf_create_target", 00:07:34.654 "nvmf_subsystem_allow_any_host", 00:07:34.654 "nvmf_subsystem_set_keys", 00:07:34.654 "nvmf_subsystem_remove_host", 00:07:34.654 "nvmf_subsystem_add_host", 00:07:34.654 "nvmf_ns_remove_host", 00:07:34.654 "nvmf_ns_add_host", 00:07:34.654 "nvmf_subsystem_remove_ns", 00:07:34.654 "nvmf_subsystem_set_ns_ana_group", 00:07:34.654 "nvmf_subsystem_add_ns", 00:07:34.654 "nvmf_subsystem_listener_set_ana_state", 00:07:34.654 "nvmf_discovery_get_referrals", 00:07:34.654 "nvmf_discovery_remove_referral", 00:07:34.654 "nvmf_discovery_add_referral", 00:07:34.654 "nvmf_subsystem_remove_listener", 00:07:34.654 "nvmf_subsystem_add_listener", 00:07:34.654 "nvmf_delete_subsystem", 00:07:34.654 "nvmf_create_subsystem", 00:07:34.654 "nvmf_get_subsystems", 00:07:34.654 "env_dpdk_get_mem_stats", 00:07:34.654 "nbd_get_disks", 00:07:34.654 "nbd_stop_disk", 00:07:34.654 "nbd_start_disk", 00:07:34.654 "ublk_recover_disk", 00:07:34.654 "ublk_get_disks", 00:07:34.654 "ublk_stop_disk", 00:07:34.654 "ublk_start_disk", 00:07:34.654 "ublk_destroy_target", 00:07:34.654 "ublk_create_target", 00:07:34.654 "virtio_blk_create_transport", 00:07:34.654 "virtio_blk_get_transports", 00:07:34.654 "vhost_controller_set_coalescing", 00:07:34.654 "vhost_get_controllers", 00:07:34.654 "vhost_delete_controller", 00:07:34.654 "vhost_create_blk_controller", 00:07:34.654 "vhost_scsi_controller_remove_target", 00:07:34.654 "vhost_scsi_controller_add_target", 00:07:34.654 "vhost_start_scsi_controller", 00:07:34.654 "vhost_create_scsi_controller", 00:07:34.654 "thread_set_cpumask", 00:07:34.654 "scheduler_set_options", 00:07:34.654 "framework_get_governor", 00:07:34.654 "framework_get_scheduler", 00:07:34.654 "framework_set_scheduler", 00:07:34.654 "framework_get_reactors", 00:07:34.654 "thread_get_io_channels", 00:07:34.654 "thread_get_pollers", 00:07:34.654 "thread_get_stats", 00:07:34.654 "framework_monitor_context_switch", 00:07:34.654 "spdk_kill_instance", 00:07:34.654 "log_enable_timestamps", 00:07:34.654 "log_get_flags", 00:07:34.654 "log_clear_flag", 00:07:34.654 "log_set_flag", 00:07:34.654 "log_get_level", 00:07:34.654 "log_set_level", 00:07:34.654 "log_get_print_level", 00:07:34.654 "log_set_print_level", 00:07:34.654 "framework_enable_cpumask_locks", 00:07:34.654 "framework_disable_cpumask_locks", 00:07:34.654 "framework_wait_init", 00:07:34.654 "framework_start_init", 00:07:34.654 "scsi_get_devices", 00:07:34.654 "bdev_get_histogram", 00:07:34.654 "bdev_enable_histogram", 00:07:34.654 "bdev_set_qos_limit", 00:07:34.654 "bdev_set_qd_sampling_period", 00:07:34.654 "bdev_get_bdevs", 00:07:34.654 "bdev_reset_iostat", 00:07:34.654 "bdev_get_iostat", 00:07:34.654 "bdev_examine", 00:07:34.654 "bdev_wait_for_examine", 00:07:34.654 "bdev_set_options", 00:07:34.654 "accel_get_stats", 00:07:34.654 "accel_set_options", 00:07:34.654 "accel_set_driver", 00:07:34.654 "accel_crypto_key_destroy", 00:07:34.654 "accel_crypto_keys_get", 00:07:34.654 "accel_crypto_key_create", 00:07:34.654 "accel_assign_opc", 00:07:34.654 "accel_get_module_info", 00:07:34.654 "accel_get_opc_assignments", 00:07:34.654 "vmd_rescan", 00:07:34.654 "vmd_remove_device", 00:07:34.654 "vmd_enable", 00:07:34.654 "sock_get_default_impl", 00:07:34.654 "sock_set_default_impl", 00:07:34.654 "sock_impl_set_options", 00:07:34.654 "sock_impl_get_options", 00:07:34.654 "iobuf_get_stats", 00:07:34.654 "iobuf_set_options", 00:07:34.654 "keyring_get_keys", 00:07:34.654 "framework_get_pci_devices", 00:07:34.654 
"framework_get_config", 00:07:34.654 "framework_get_subsystems", 00:07:34.654 "fsdev_set_opts", 00:07:34.654 "fsdev_get_opts", 00:07:34.654 "trace_get_info", 00:07:34.654 "trace_get_tpoint_group_mask", 00:07:34.654 "trace_disable_tpoint_group", 00:07:34.654 "trace_enable_tpoint_group", 00:07:34.654 "trace_clear_tpoint_mask", 00:07:34.654 "trace_set_tpoint_mask", 00:07:34.654 "notify_get_notifications", 00:07:34.654 "notify_get_types", 00:07:34.654 "spdk_get_version", 00:07:34.654 "rpc_get_methods" 00:07:34.654 ] 00:07:34.654 11:15:01 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:34.654 11:15:01 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:34.654 11:15:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:34.654 11:15:01 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:34.654 11:15:01 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58843 00:07:34.654 11:15:01 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58843 ']' 00:07:34.654 11:15:01 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58843 00:07:34.654 11:15:01 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:34.654 11:15:01 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.654 11:15:01 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58843 00:07:34.654 11:15:01 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.654 11:15:01 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.654 11:15:01 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58843' 00:07:34.654 killing process with pid 58843 00:07:34.654 11:15:01 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58843 00:07:34.654 11:15:01 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58843 00:07:37.184 ************************************ 00:07:37.184 END TEST spdkcli_tcp 00:07:37.184 ************************************ 00:07:37.184 00:07:37.184 real 0m4.285s 00:07:37.184 user 0m7.621s 00:07:37.184 sys 0m0.632s 00:07:37.184 11:15:04 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.185 11:15:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:37.185 11:15:04 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:37.185 11:15:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.185 11:15:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.185 11:15:04 -- common/autotest_common.sh@10 -- # set +x 00:07:37.185 ************************************ 00:07:37.185 START TEST dpdk_mem_utility 00:07:37.185 ************************************ 00:07:37.185 11:15:04 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:37.185 * Looking for test storage... 
00:07:37.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:37.185 11:15:04 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:37.185 11:15:04 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:07:37.185 11:15:04 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:37.443 11:15:04 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.443 11:15:04 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:37.443 11:15:04 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.443 11:15:04 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:37.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.443 --rc genhtml_branch_coverage=1 00:07:37.443 --rc genhtml_function_coverage=1 00:07:37.443 --rc genhtml_legend=1 00:07:37.443 --rc geninfo_all_blocks=1 00:07:37.443 --rc geninfo_unexecuted_blocks=1 00:07:37.443 00:07:37.443 ' 00:07:37.443 11:15:04 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:37.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.443 --rc 
genhtml_branch_coverage=1 00:07:37.443 --rc genhtml_function_coverage=1 00:07:37.443 --rc genhtml_legend=1 00:07:37.443 --rc geninfo_all_blocks=1 00:07:37.443 --rc geninfo_unexecuted_blocks=1 00:07:37.443 00:07:37.443 ' 00:07:37.443 11:15:04 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:37.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.443 --rc genhtml_branch_coverage=1 00:07:37.443 --rc genhtml_function_coverage=1 00:07:37.443 --rc genhtml_legend=1 00:07:37.443 --rc geninfo_all_blocks=1 00:07:37.443 --rc geninfo_unexecuted_blocks=1 00:07:37.443 00:07:37.443 ' 00:07:37.443 11:15:04 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:37.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.443 --rc genhtml_branch_coverage=1 00:07:37.443 --rc genhtml_function_coverage=1 00:07:37.443 --rc genhtml_legend=1 00:07:37.443 --rc geninfo_all_blocks=1 00:07:37.443 --rc geninfo_unexecuted_blocks=1 00:07:37.443 00:07:37.443 ' 00:07:37.443 11:15:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:37.443 11:15:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58965 00:07:37.443 11:15:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:37.443 11:15:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58965 00:07:37.443 11:15:04 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58965 ']' 00:07:37.443 11:15:04 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.443 11:15:04 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.443 11:15:04 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.443 11:15:04 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.443 11:15:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:37.443 [2024-12-10 11:15:04.520810] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
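[Editor's note] dpdk_mem_utility checks DPDK's heap accounting end to end: the env_dpdk_get_mem_stats RPC makes the running target write its allocator state to /tmp/spdk_mem_dump.txt (the filename echoed in the JSON reply below), and scripts/dpdk_mem_info.py then parses that dump — once for the overall heap/mempool/memzone summary and again with -m 0 for the per-element view of heap 0 that fills the rest of this section. A sketch of the sequence, assuming dpdk_mem_info.py reads the default dump path shown in the reply:

    # Ask the running target to dump DPDK memory statistics to a file.
    scripts/rpc.py env_dpdk_get_mem_stats
    # reply: { "filename": "/tmp/spdk_mem_dump.txt" }

    # Summarize heaps, mempools, and memzones, then detail heap 0.
    scripts/dpdk_mem_info.py
    scripts/dpdk_mem_info.py -m 0

Sizing the named regions against the pid in their names (bdev_io_58965, msgpool_58965, and so on) is what lets the test attribute every reservation to this specific spdk_tgt instance.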
00:07:37.443 [2024-12-10 11:15:04.521141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58965 ] 00:07:37.702 [2024-12-10 11:15:04.700747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.961 [2024-12-10 11:15:04.818447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.905 11:15:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.905 11:15:05 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:38.905 11:15:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:38.905 11:15:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:38.905 11:15:05 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.905 11:15:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:38.905 { 00:07:38.905 "filename": "/tmp/spdk_mem_dump.txt" 00:07:38.905 } 00:07:38.905 11:15:05 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.905 11:15:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:38.905 DPDK memory size 824.000000 MiB in 1 heap(s) 00:07:38.905 1 heaps totaling size 824.000000 MiB 00:07:38.905 size: 824.000000 MiB heap id: 0 00:07:38.905 end heaps---------- 00:07:38.905 9 mempools totaling size 603.782043 MiB 00:07:38.905 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:38.905 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:38.905 size: 100.555481 MiB name: bdev_io_58965 00:07:38.905 size: 50.003479 MiB name: msgpool_58965 00:07:38.905 size: 36.509338 MiB name: fsdev_io_58965 00:07:38.905 size: 21.763794 MiB name: PDU_Pool 00:07:38.905 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:38.905 size: 4.133484 MiB name: evtpool_58965 00:07:38.905 size: 0.026123 MiB name: Session_Pool 00:07:38.905 end mempools------- 00:07:38.905 6 memzones totaling size 4.142822 MiB 00:07:38.905 size: 1.000366 MiB name: RG_ring_0_58965 00:07:38.905 size: 1.000366 MiB name: RG_ring_1_58965 00:07:38.905 size: 1.000366 MiB name: RG_ring_4_58965 00:07:38.905 size: 1.000366 MiB name: RG_ring_5_58965 00:07:38.905 size: 0.125366 MiB name: RG_ring_2_58965 00:07:38.905 size: 0.015991 MiB name: RG_ring_3_58965 00:07:38.905 end memzones------- 00:07:38.905 11:15:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:38.905 heap id: 0 total size: 824.000000 MiB number of busy elements: 323 number of free elements: 18 00:07:38.905 list of free elements. 
size: 16.779419 MiB 00:07:38.905 element at address: 0x200006400000 with size: 1.995972 MiB 00:07:38.905 element at address: 0x20000a600000 with size: 1.995972 MiB 00:07:38.905 element at address: 0x200003e00000 with size: 1.991028 MiB 00:07:38.905 element at address: 0x200019500040 with size: 0.999939 MiB 00:07:38.905 element at address: 0x200019900040 with size: 0.999939 MiB 00:07:38.905 element at address: 0x200019a00000 with size: 0.999084 MiB 00:07:38.905 element at address: 0x200032600000 with size: 0.994324 MiB 00:07:38.905 element at address: 0x200000400000 with size: 0.992004 MiB 00:07:38.905 element at address: 0x200019200000 with size: 0.959656 MiB 00:07:38.905 element at address: 0x200019d00040 with size: 0.936401 MiB 00:07:38.905 element at address: 0x200000200000 with size: 0.716980 MiB 00:07:38.905 element at address: 0x20001b400000 with size: 0.560730 MiB 00:07:38.905 element at address: 0x200000c00000 with size: 0.489197 MiB 00:07:38.905 element at address: 0x200019600000 with size: 0.487976 MiB 00:07:38.905 element at address: 0x200019e00000 with size: 0.485413 MiB 00:07:38.905 element at address: 0x200012c00000 with size: 0.433472 MiB 00:07:38.905 element at address: 0x200028800000 with size: 0.390442 MiB 00:07:38.905 element at address: 0x200000800000 with size: 0.350891 MiB 00:07:38.905 list of standard malloc elements. size: 199.289673 MiB 00:07:38.905 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:07:38.905 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:07:38.905 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:38.905 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:07:38.905 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:07:38.905 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:38.905 element at address: 0x200019deff40 with size: 0.062683 MiB 00:07:38.905 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:38.905 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:07:38.905 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:07:38.905 element at address: 0x200012bff040 with size: 0.000305 MiB 00:07:38.905 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:07:38.905 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:07:38.905 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:07:38.906 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:07:38.906 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:07:38.906 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:07:38.906 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:07:38.906 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200000cff000 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012bff180 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012bff280 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012bff380 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012bff480 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012bff580 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012bff680 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012bff780 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012bff880 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012bff980 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:07:38.906 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200019affc40 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4915c0 with size: 0.000244 MiB 
00:07:38.906 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:07:38.906 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:07:38.907 element at 
address: 0x20001b4947c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:07:38.907 element at address: 0x200028863f40 with size: 0.000244 MiB 00:07:38.907 element at address: 0x200028864040 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886af80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886b080 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886b180 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886b280 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886b380 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886b480 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886b580 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886b680 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886b780 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886b880 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886b980 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886be80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886c080 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886c180 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886c280 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886c380 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886c480 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886c580 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886c680 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886c780 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886c880 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886c980 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886d080 
with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886d180 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886d280 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886d380 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886d480 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886d580 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886d680 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886d780 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886d880 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886d980 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886da80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886db80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886de80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886df80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886e080 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886e180 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886e280 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886e380 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886e480 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886e580 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886e680 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886e780 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886e880 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886e980 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886f080 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886f180 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886f280 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886f380 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886f480 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886f580 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886f680 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886f780 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886f880 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886f980 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:07:38.907 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:07:38.907 list of memzone associated elements. 
size: 607.930908 MiB 00:07:38.907 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:07:38.907 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:38.907 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:07:38.907 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:38.907 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:07:38.907 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58965_0 00:07:38.907 element at address: 0x200000dff340 with size: 48.003113 MiB 00:07:38.907 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58965_0 00:07:38.907 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:07:38.907 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58965_0 00:07:38.907 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:07:38.907 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:38.907 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:07:38.907 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:38.907 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:07:38.907 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58965_0 00:07:38.907 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:07:38.907 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58965 00:07:38.907 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:38.907 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58965 00:07:38.907 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:07:38.907 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:38.907 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:07:38.907 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:38.907 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:07:38.907 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:38.907 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:07:38.907 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:38.907 element at address: 0x200000cff100 with size: 1.000549 MiB 00:07:38.907 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58965 00:07:38.908 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:07:38.908 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58965 00:07:38.908 element at address: 0x200019affd40 with size: 1.000549 MiB 00:07:38.908 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58965 00:07:38.908 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:07:38.908 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58965 00:07:38.908 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:07:38.908 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58965 00:07:38.908 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:07:38.908 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58965 00:07:38.908 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:07:38.908 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:38.908 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:07:38.908 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:38.908 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:07:38.908 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:07:38.908 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:07:38.908 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58965 00:07:38.908 element at address: 0x20000085df80 with size: 0.125549 MiB 00:07:38.908 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58965 00:07:38.908 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:07:38.908 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:38.908 element at address: 0x200028864140 with size: 0.023804 MiB 00:07:38.908 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:38.908 element at address: 0x200000859d40 with size: 0.016174 MiB 00:07:38.908 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58965 00:07:38.908 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:07:38.908 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:38.908 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:07:38.908 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58965 00:07:38.908 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:07:38.908 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58965 00:07:38.908 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:07:38.908 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58965 00:07:38.908 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:07:38.908 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:38.908 11:15:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:38.908 11:15:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58965 00:07:38.908 11:15:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58965 ']' 00:07:38.908 11:15:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58965 00:07:38.908 11:15:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:38.908 11:15:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.908 11:15:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58965 00:07:38.908 11:15:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.908 11:15:05 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.908 11:15:05 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58965' 00:07:38.908 killing process with pid 58965 00:07:38.908 11:15:05 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58965 00:07:38.908 11:15:05 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58965 00:07:41.484 00:07:41.484 real 0m4.118s 00:07:41.484 user 0m3.977s 00:07:41.484 sys 0m0.609s 00:07:41.484 11:15:08 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.484 11:15:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:41.484 ************************************ 00:07:41.484 END TEST dpdk_mem_utility 00:07:41.484 ************************************ 00:07:41.484 11:15:08 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:41.484 11:15:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.484 11:15:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.484 11:15:08 -- common/autotest_common.sh@10 -- # set +x 
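
The dpdk_mem_utility pass above boils down to four steps: start a target, have it dump its DPDK memory stats over RPC, then parse the resulting /tmp/spdk_mem_dump.txt twice, once for the heap/mempool/memzone summary and once with -m 0 for the per-element view of heap 0. A minimal by-hand sketch of that flow, assuming the checkout path used in this run and a spdk_tgt built at the usual build/bin location (the test itself waits with waitforlisten rather than a fixed sleep):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    # Single-core target, matching the -c 0x1 EAL mask in the log above.
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
    spdkpid=$!
    sleep 2    # crude stand-in for waitforlisten

    # Ask the running app for its DPDK memory stats; the RPC replies with
    # the dump file it wrote, /tmp/spdk_mem_dump.txt by default.
    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

    "$SPDK_DIR/scripts/dpdk_mem_info.py"         # summary: heaps, mempools, memzones
    "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0    # busy/free elements of heap id 0

    kill "$spdkpid"
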
00:07:41.484 ************************************ 00:07:41.484 START TEST event 00:07:41.484 ************************************ 00:07:41.484 11:15:08 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:41.484 * Looking for test storage... 00:07:41.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:41.484 11:15:08 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:41.484 11:15:08 event -- common/autotest_common.sh@1711 -- # lcov --version 00:07:41.484 11:15:08 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:41.484 11:15:08 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:41.484 11:15:08 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.484 11:15:08 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.484 11:15:08 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.484 11:15:08 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.484 11:15:08 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.484 11:15:08 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.484 11:15:08 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.484 11:15:08 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.484 11:15:08 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.484 11:15:08 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.484 11:15:08 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.484 11:15:08 event -- scripts/common.sh@344 -- # case "$op" in 00:07:41.485 11:15:08 event -- scripts/common.sh@345 -- # : 1 00:07:41.485 11:15:08 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.485 11:15:08 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.485 11:15:08 event -- scripts/common.sh@365 -- # decimal 1 00:07:41.485 11:15:08 event -- scripts/common.sh@353 -- # local d=1 00:07:41.485 11:15:08 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.485 11:15:08 event -- scripts/common.sh@355 -- # echo 1 00:07:41.485 11:15:08 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.485 11:15:08 event -- scripts/common.sh@366 -- # decimal 2 00:07:41.485 11:15:08 event -- scripts/common.sh@353 -- # local d=2 00:07:41.485 11:15:08 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.485 11:15:08 event -- scripts/common.sh@355 -- # echo 2 00:07:41.485 11:15:08 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.485 11:15:08 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.485 11:15:08 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.485 11:15:08 event -- scripts/common.sh@368 -- # return 0 00:07:41.485 11:15:08 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.485 11:15:08 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:41.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.485 --rc genhtml_branch_coverage=1 00:07:41.485 --rc genhtml_function_coverage=1 00:07:41.485 --rc genhtml_legend=1 00:07:41.485 --rc geninfo_all_blocks=1 00:07:41.485 --rc geninfo_unexecuted_blocks=1 00:07:41.485 00:07:41.485 ' 00:07:41.485 11:15:08 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:41.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.485 --rc genhtml_branch_coverage=1 00:07:41.485 --rc genhtml_function_coverage=1 00:07:41.485 --rc genhtml_legend=1 00:07:41.485 --rc 
geninfo_all_blocks=1 00:07:41.485 --rc geninfo_unexecuted_blocks=1 00:07:41.485 00:07:41.485 ' 00:07:41.485 11:15:08 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:41.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.485 --rc genhtml_branch_coverage=1 00:07:41.485 --rc genhtml_function_coverage=1 00:07:41.485 --rc genhtml_legend=1 00:07:41.485 --rc geninfo_all_blocks=1 00:07:41.485 --rc geninfo_unexecuted_blocks=1 00:07:41.485 00:07:41.485 ' 00:07:41.485 11:15:08 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:41.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.485 --rc genhtml_branch_coverage=1 00:07:41.485 --rc genhtml_function_coverage=1 00:07:41.485 --rc genhtml_legend=1 00:07:41.485 --rc geninfo_all_blocks=1 00:07:41.485 --rc geninfo_unexecuted_blocks=1 00:07:41.485 00:07:41.485 ' 00:07:41.485 11:15:08 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:41.485 11:15:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:41.485 11:15:08 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:41.485 11:15:08 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:41.485 11:15:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.485 11:15:08 event -- common/autotest_common.sh@10 -- # set +x 00:07:41.485 ************************************ 00:07:41.485 START TEST event_perf 00:07:41.485 ************************************ 00:07:41.485 11:15:08 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:41.744 Running I/O for 1 seconds...[2024-12-10 11:15:08.628563] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:41.744 [2024-12-10 11:15:08.628775] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59073 ] 00:07:41.744 [2024-12-10 11:15:08.814621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.003 [2024-12-10 11:15:08.948085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.003 [2024-12-10 11:15:08.948495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.003 [2024-12-10 11:15:08.948465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.003 Running I/O for 1 seconds...[2024-12-10 11:15:08.948357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.382 00:07:43.382 lcore 0: 206166 00:07:43.382 lcore 1: 206165 00:07:43.382 lcore 2: 206166 00:07:43.382 lcore 3: 206165 00:07:43.382 done. 
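
The four counters above are per-lcore totals of events processed inside the -t 1 window; -m 0xF is why four reactors come up, and each core turned over roughly 206k events in that second. Rerunning the binary by hand with a different mask or window only changes those two flags; a sketch, reusing the paths from this run and assuming the tree was built with tests:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # Same invocation as above: four reactors, one second.
    "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1
    # For comparison: two reactors, five seconds.
    "$SPDK_DIR/test/event/event_perf/event_perf" -m 0x3 -t 5
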
00:07:43.382 00:07:43.382 real 0m1.620s 00:07:43.383 user 0m4.359s 00:07:43.383 sys 0m0.134s 00:07:43.383 11:15:10 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.383 11:15:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:43.383 ************************************ 00:07:43.383 END TEST event_perf 00:07:43.383 ************************************ 00:07:43.383 11:15:10 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:43.383 11:15:10 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:43.383 11:15:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.383 11:15:10 event -- common/autotest_common.sh@10 -- # set +x 00:07:43.383 ************************************ 00:07:43.383 START TEST event_reactor 00:07:43.383 ************************************ 00:07:43.383 11:15:10 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:43.383 [2024-12-10 11:15:10.324865] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:43.383 [2024-12-10 11:15:10.325198] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59112 ] 00:07:43.642 [2024-12-10 11:15:10.531541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.642 [2024-12-10 11:15:10.648762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.016 test_start 00:07:45.016 oneshot 00:07:45.016 tick 100 00:07:45.016 tick 100 00:07:45.016 tick 250 00:07:45.016 tick 100 00:07:45.016 tick 100 00:07:45.016 tick 100 00:07:45.016 tick 250 00:07:45.016 tick 500 00:07:45.016 tick 100 00:07:45.016 tick 100 00:07:45.016 tick 250 00:07:45.016 tick 100 00:07:45.016 tick 100 00:07:45.016 test_end 00:07:45.016 00:07:45.016 real 0m1.602s 00:07:45.016 user 0m1.389s 00:07:45.016 sys 0m0.103s 00:07:45.016 11:15:11 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.016 11:15:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:45.016 ************************************ 00:07:45.016 END TEST event_reactor 00:07:45.016 ************************************ 00:07:45.016 11:15:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:45.016 11:15:11 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:45.016 11:15:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.016 11:15:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:45.016 ************************************ 00:07:45.016 START TEST event_reactor_perf 00:07:45.016 ************************************ 00:07:45.016 11:15:11 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:45.016 [2024-12-10 11:15:11.968660] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:07:45.016 [2024-12-10 11:15:11.968835] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59149 ] 00:07:45.275 [2024-12-10 11:15:12.162372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.275 [2024-12-10 11:15:12.290239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.655 test_start 00:07:46.656 test_end 00:07:46.656 Performance: 366345 events per second 00:07:46.656 00:07:46.656 real 0m1.605s 00:07:46.656 user 0m1.380s 00:07:46.656 sys 0m0.115s 00:07:46.656 11:15:13 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.656 ************************************ 00:07:46.656 END TEST event_reactor_perf 00:07:46.656 ************************************ 00:07:46.656 11:15:13 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:46.656 11:15:13 event -- event/event.sh@49 -- # uname -s 00:07:46.656 11:15:13 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:46.656 11:15:13 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:46.656 11:15:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.656 11:15:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.656 11:15:13 event -- common/autotest_common.sh@10 -- # set +x 00:07:46.656 ************************************ 00:07:46.656 START TEST event_scheduler 00:07:46.656 ************************************ 00:07:46.656 11:15:13 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:46.656 * Looking for test storage... 
00:07:46.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:46.656 11:15:13 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:46.656 11:15:13 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:07:46.656 11:15:13 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:46.915 11:15:13 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.915 11:15:13 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:46.915 11:15:13 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.915 11:15:13 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:46.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.915 --rc genhtml_branch_coverage=1 00:07:46.915 --rc genhtml_function_coverage=1 00:07:46.915 --rc genhtml_legend=1 00:07:46.915 --rc geninfo_all_blocks=1 00:07:46.915 --rc geninfo_unexecuted_blocks=1 00:07:46.915 00:07:46.915 ' 00:07:46.915 11:15:13 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:46.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.915 --rc genhtml_branch_coverage=1 00:07:46.915 --rc genhtml_function_coverage=1 00:07:46.915 --rc genhtml_legend=1 00:07:46.915 --rc geninfo_all_blocks=1 00:07:46.915 --rc geninfo_unexecuted_blocks=1 00:07:46.915 00:07:46.915 ' 00:07:46.915 11:15:13 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:46.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.915 --rc genhtml_branch_coverage=1 00:07:46.915 --rc genhtml_function_coverage=1 00:07:46.915 --rc genhtml_legend=1 00:07:46.915 --rc geninfo_all_blocks=1 00:07:46.915 --rc geninfo_unexecuted_blocks=1 00:07:46.915 00:07:46.915 ' 00:07:46.915 11:15:13 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:46.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.915 --rc genhtml_branch_coverage=1 00:07:46.915 --rc genhtml_function_coverage=1 00:07:46.915 --rc genhtml_legend=1 00:07:46.915 --rc geninfo_all_blocks=1 00:07:46.915 --rc geninfo_unexecuted_blocks=1 00:07:46.915 00:07:46.916 ' 00:07:46.916 11:15:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:46.916 11:15:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59225 00:07:46.916 11:15:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:46.916 11:15:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:46.916 11:15:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59225 00:07:46.916 11:15:13 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59225 ']' 00:07:46.916 11:15:13 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.916 11:15:13 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.916 11:15:13 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.916 11:15:13 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.916 11:15:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:46.916 [2024-12-10 11:15:13.937191] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:46.916 [2024-12-10 11:15:13.937512] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59225 ] 00:07:47.175 [2024-12-10 11:15:14.118494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.175 [2024-12-10 11:15:14.244582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.175 [2024-12-10 11:15:14.244761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.175 [2024-12-10 11:15:14.244861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.175 [2024-12-10 11:15:14.244901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.743 11:15:14 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.743 11:15:14 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:47.743 11:15:14 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:47.743 11:15:14 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.743 11:15:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:47.743 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:47.743 POWER: Cannot set governor of lcore 0 to userspace 00:07:47.743 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:47.743 POWER: Cannot set governor of lcore 0 to performance 00:07:47.743 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:47.743 POWER: Cannot set governor of lcore 0 to userspace 00:07:47.743 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:47.743 POWER: Cannot set governor of lcore 0 to userspace 00:07:47.743 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:47.743 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:47.743 POWER: Unable to set Power Management Environment for lcore 0 00:07:47.743 [2024-12-10 11:15:14.777970] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:07:47.743 [2024-12-10 11:15:14.777997] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:07:47.743 [2024-12-10 11:15:14.778010] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:47.743 [2024-12-10 11:15:14.778032] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:47.743 [2024-12-10 11:15:14.778043] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:47.743 [2024-12-10 11:15:14.778056] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:47.743 11:15:14 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.743 11:15:14 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:47.743 11:15:14 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.743 11:15:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:48.002 [2024-12-10 11:15:15.111288] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:48.002 11:15:15 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.002 11:15:15 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:48.002 11:15:15 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.002 11:15:15 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.002 11:15:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:48.261 ************************************ 00:07:48.261 START TEST scheduler_create_thread 00:07:48.261 ************************************ 00:07:48.261 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:48.261 11:15:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:48.261 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.261 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.261 2 00:07:48.261 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.261 11:15:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:48.261 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.261 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.261 3 00:07:48.261 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.261 11:15:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:48.261 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.261 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.261 4 00:07:48.261 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.261 11:15:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:48.261 11:15:15 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.261 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.261 5 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.262 6 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.262 7 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.262 8 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.262 9 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.262 10 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.262 11:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:49.640 11:15:16 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.640 11:15:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:49.640 11:15:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:49.640 11:15:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.640 11:15:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.576 11:15:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.576 11:15:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:50.576 11:15:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.576 11:15:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:51.515 11:15:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.515 11:15:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:51.515 11:15:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:51.515 11:15:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.515 11:15:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:52.092 ************************************ 00:07:52.092 END TEST scheduler_create_thread 00:07:52.092 ************************************ 00:07:52.092 11:15:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.092 00:07:52.092 real 0m3.889s 00:07:52.092 user 0m0.029s 00:07:52.092 sys 0m0.009s 00:07:52.092 11:15:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.092 11:15:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:52.092 11:15:19 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:52.092 11:15:19 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59225 00:07:52.092 11:15:19 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59225 ']' 00:07:52.092 11:15:19 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59225 00:07:52.092 11:15:19 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:52.092 11:15:19 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.092 11:15:19 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59225 00:07:52.092 killing process with pid 59225 00:07:52.092 11:15:19 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:52.092 11:15:19 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:52.092 11:15:19 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59225' 00:07:52.092 11:15:19 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59225 00:07:52.092 11:15:19 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59225 00:07:52.351 [2024-12-10 11:15:19.394863] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:53.729 00:07:53.729 real 0m6.962s 00:07:53.729 user 0m14.245s 00:07:53.729 sys 0m0.606s 00:07:53.729 ************************************ 00:07:53.729 END TEST event_scheduler 00:07:53.729 ************************************ 00:07:53.729 11:15:20 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.729 11:15:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:53.729 11:15:20 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:53.729 11:15:20 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:53.729 11:15:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.729 11:15:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.729 11:15:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:53.729 ************************************ 00:07:53.729 START TEST app_repeat 00:07:53.729 ************************************ 00:07:53.729 11:15:20 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:53.729 11:15:20 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.729 11:15:20 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:53.729 11:15:20 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:53.729 11:15:20 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:53.729 11:15:20 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:53.729 11:15:20 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:53.729 11:15:20 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:53.729 11:15:20 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59353 00:07:53.729 11:15:20 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:53.729 11:15:20 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:53.729 Process app_repeat pid: 59353 00:07:53.729 11:15:20 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59353' 00:07:53.729 spdk_app_start Round 0 00:07:53.729 11:15:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:53.729 11:15:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:53.729 11:15:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59353 /var/tmp/spdk-nbd.sock 00:07:53.729 11:15:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59353 ']' 00:07:53.729 11:15:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:53.729 11:15:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:53.729 11:15:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:53.729 11:15:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.729 11:15:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:53.729 [2024-12-10 11:15:20.723580] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
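The scheduler segment that just finished drives everything through SPDK's scheduler_plugin RPCs. Condensed from the trace above into a standalone sketch (rpc.py path, thread names, masks, activity values, and thread IDs 11/12 are exactly the ones logged; running it requires an app started with that test plugin):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # One idle thread pinned per core: masks 0x1..0x8, 0% active.
    $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # Unpinned threads with synthetic load, created by name.
    $rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    $rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
    # Raise thread 11 to 50% busy, then remove thread 12 outright.
    $rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50
    $rpc --plugin scheduler_plugin scheduler_thread_delete 12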
00:07:53.729 [2024-12-10 11:15:20.723711] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59353 ] 00:07:53.988 [2024-12-10 11:15:20.889877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:53.988 [2024-12-10 11:15:21.031015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.988 [2024-12-10 11:15:21.031047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.557 11:15:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.557 11:15:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:54.557 11:15:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:54.815 Malloc0 00:07:54.815 11:15:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:55.074 Malloc1 00:07:55.333 11:15:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:55.333 11:15:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.333 11:15:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:55.333 11:15:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:55.333 11:15:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:55.333 11:15:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:55.333 11:15:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:55.333 11:15:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.333 11:15:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:55.333 11:15:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:55.333 11:15:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:55.333 11:15:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:55.333 11:15:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:55.333 11:15:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:55.333 11:15:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:55.333 11:15:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:55.333 /dev/nbd0 00:07:55.592 11:15:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:55.592 11:15:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:55.592 11:15:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:55.592 11:15:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:55.592 11:15:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:55.592 11:15:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:55.592 11:15:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:55.592 11:15:22 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:07:55.592 11:15:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:55.592 11:15:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:55.592 11:15:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:55.592 1+0 records in 00:07:55.592 1+0 records out 00:07:55.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391491 s, 10.5 MB/s 00:07:55.592 11:15:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:55.592 11:15:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:55.592 11:15:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:55.592 11:15:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:55.592 11:15:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:55.592 11:15:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:55.592 11:15:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:55.592 11:15:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:55.851 /dev/nbd1 00:07:55.851 11:15:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:55.851 11:15:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:55.851 11:15:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:55.851 11:15:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:55.851 11:15:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:55.851 11:15:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:55.851 11:15:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:55.851 11:15:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:55.851 11:15:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:55.851 11:15:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:55.851 11:15:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:55.851 1+0 records in 00:07:55.851 1+0 records out 00:07:55.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041702 s, 9.8 MB/s 00:07:55.851 11:15:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:55.851 11:15:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:55.851 11:15:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:55.851 11:15:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:55.851 11:15:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:55.851 11:15:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:55.851 11:15:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:55.851 11:15:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:55.851 11:15:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.851 
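Each nbd_start_disk in this round is followed by the waitfornbd probe traced above: poll /proc/partitions until the kernel exposes the node, then read one block with direct I/O and check it came back non-empty. A rough reconstruction (the retry bound, grep, dd, and stat calls are verbatim from the trace; the poll interval is an assumption):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumed delay between polls; not visible in the trace
        done
        # Prove the device answers reads: one 4K block, O_DIRECT.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }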
11:15:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:56.109 11:15:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:56.109 { 00:07:56.109 "nbd_device": "/dev/nbd0", 00:07:56.109 "bdev_name": "Malloc0" 00:07:56.109 }, 00:07:56.109 { 00:07:56.109 "nbd_device": "/dev/nbd1", 00:07:56.109 "bdev_name": "Malloc1" 00:07:56.109 } 00:07:56.109 ]' 00:07:56.109 11:15:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:56.109 { 00:07:56.109 "nbd_device": "/dev/nbd0", 00:07:56.109 "bdev_name": "Malloc0" 00:07:56.109 }, 00:07:56.109 { 00:07:56.109 "nbd_device": "/dev/nbd1", 00:07:56.109 "bdev_name": "Malloc1" 00:07:56.109 } 00:07:56.109 ]' 00:07:56.109 11:15:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:56.109 11:15:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:56.109 /dev/nbd1' 00:07:56.109 11:15:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:56.109 /dev/nbd1' 00:07:56.109 11:15:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:56.109 11:15:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:56.109 11:15:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:56.109 11:15:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:56.109 11:15:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:56.109 11:15:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:56.110 256+0 records in 00:07:56.110 256+0 records out 00:07:56.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127494 s, 82.2 MB/s 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:56.110 256+0 records in 00:07:56.110 256+0 records out 00:07:56.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303313 s, 34.6 MB/s 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:56.110 256+0 records in 00:07:56.110 256+0 records out 00:07:56.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0318469 s, 32.9 MB/s 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:56.110 11:15:23 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.110 11:15:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:56.369 11:15:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:56.369 11:15:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:56.369 11:15:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:56.369 11:15:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.369 11:15:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.369 11:15:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:56.369 11:15:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:56.369 11:15:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.369 11:15:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.369 11:15:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:56.627 11:15:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:56.627 11:15:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:56.627 11:15:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:56.627 11:15:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.627 11:15:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.627 11:15:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:56.627 11:15:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:56.627 11:15:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.627 11:15:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:56.627 11:15:23 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.627 11:15:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:56.886 11:15:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:56.886 11:15:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:56.886 11:15:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:56.886 11:15:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:56.886 11:15:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:56.886 11:15:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:56.886 11:15:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:56.886 11:15:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:56.886 11:15:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:56.886 11:15:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:56.886 11:15:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:56.886 11:15:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:56.886 11:15:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:57.454 11:15:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:58.830 [2024-12-10 11:15:25.514119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:58.830 [2024-12-10 11:15:25.630948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.830 [2024-12-10 11:15:25.630966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.830 [2024-12-10 11:15:25.831192] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:58.830 [2024-12-10 11:15:25.831273] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:00.732 spdk_app_start Round 1 00:08:00.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:00.732 11:15:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:00.732 11:15:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:00.732 11:15:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59353 /var/tmp/spdk-nbd.sock 00:08:00.732 11:15:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59353 ']' 00:08:00.732 11:15:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:00.732 11:15:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.732 11:15:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
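Round 0 closed with the write/verify cycle visible just above; isolated, it is: seed 1 MiB of random data, push it through both nbd devices with O_DIRECT, and compare each device against the seed file byte for byte. Commands, block sizes, and the 1M compare window match the trace; only the temp path is shortened:

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of=$tmp bs=4096 count=256              # 1 MiB seed pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct     # write through nbd
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp $nbd                                # readback must match the seed
    done
    rm $tmp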
00:08:00.732 11:15:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.732 11:15:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:00.732 11:15:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.732 11:15:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:00.732 11:15:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:00.732 Malloc0 00:08:00.732 11:15:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:01.300 Malloc1 00:08:01.300 11:15:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:01.300 11:15:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.300 11:15:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:01.300 11:15:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:01.300 11:15:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:01.300 11:15:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:01.300 11:15:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:01.300 11:15:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.300 11:15:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:01.300 11:15:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:01.300 11:15:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:01.300 11:15:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:01.300 11:15:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:01.300 11:15:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:01.300 11:15:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:01.300 11:15:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:01.559 /dev/nbd0 00:08:01.559 11:15:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:01.559 11:15:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:01.559 11:15:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:01.559 11:15:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:01.559 11:15:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:01.559 11:15:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:01.559 11:15:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:01.559 11:15:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:01.559 11:15:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:01.559 11:15:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:01.559 11:15:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:01.559 1+0 records in 00:08:01.559 1+0 records out 
00:08:01.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265404 s, 15.4 MB/s 00:08:01.559 11:15:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:01.559 11:15:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:01.559 11:15:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:01.559 11:15:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:01.559 11:15:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:01.559 11:15:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.559 11:15:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:01.559 11:15:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:01.817 /dev/nbd1 00:08:01.817 11:15:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:01.817 11:15:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:01.817 11:15:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:01.817 11:15:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:01.817 11:15:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:01.817 11:15:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:01.817 11:15:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:01.817 11:15:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:01.817 11:15:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:01.817 11:15:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:01.817 11:15:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:01.817 1+0 records in 00:08:01.817 1+0 records out 00:08:01.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333842 s, 12.3 MB/s 00:08:01.817 11:15:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:01.817 11:15:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:01.817 11:15:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:01.817 11:15:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:01.817 11:15:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:01.817 11:15:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.817 11:15:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:01.817 11:15:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:01.817 11:15:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.817 11:15:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.076 11:15:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:02.076 { 00:08:02.077 "nbd_device": "/dev/nbd0", 00:08:02.077 "bdev_name": "Malloc0" 00:08:02.077 }, 00:08:02.077 { 00:08:02.077 "nbd_device": "/dev/nbd1", 00:08:02.077 "bdev_name": "Malloc1" 00:08:02.077 } 
00:08:02.077 ]' 00:08:02.077 11:15:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:02.077 11:15:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:02.077 { 00:08:02.077 "nbd_device": "/dev/nbd0", 00:08:02.077 "bdev_name": "Malloc0" 00:08:02.077 }, 00:08:02.077 { 00:08:02.077 "nbd_device": "/dev/nbd1", 00:08:02.077 "bdev_name": "Malloc1" 00:08:02.077 } 00:08:02.077 ]' 00:08:02.077 11:15:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:02.077 /dev/nbd1' 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:02.077 /dev/nbd1' 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:02.077 256+0 records in 00:08:02.077 256+0 records out 00:08:02.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123798 s, 84.7 MB/s 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:02.077 256+0 records in 00:08:02.077 256+0 records out 00:08:02.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293749 s, 35.7 MB/s 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:02.077 256+0 records in 00:08:02.077 256+0 records out 00:08:02.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032881 s, 31.9 MB/s 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.077 11:15:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:02.336 11:15:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:02.336 11:15:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:02.336 11:15:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:02.336 11:15:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.336 11:15:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.336 11:15:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:02.336 11:15:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:02.336 11:15:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.336 11:15:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.336 11:15:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:02.595 11:15:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:02.595 11:15:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:02.595 11:15:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:02.595 11:15:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.595 11:15:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.595 11:15:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:02.595 11:15:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:02.595 11:15:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.595 11:15:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:02.595 11:15:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.595 11:15:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.853 11:15:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:02.853 11:15:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:02.853 11:15:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:08:02.853 11:15:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:02.853 11:15:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:02.853 11:15:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:02.853 11:15:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:02.853 11:15:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:02.853 11:15:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:02.853 11:15:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:02.853 11:15:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:02.853 11:15:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:02.853 11:15:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:03.420 11:15:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:04.794 [2024-12-10 11:15:31.559718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:04.794 [2024-12-10 11:15:31.675775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.794 [2024-12-10 11:15:31.675808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.794 [2024-12-10 11:15:31.873713] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:04.794 [2024-12-10 11:15:31.873829] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:06.702 spdk_app_start Round 2 00:08:06.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:06.702 11:15:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:06.702 11:15:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:06.702 11:15:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59353 /var/tmp/spdk-nbd.sock 00:08:06.702 11:15:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59353 ']' 00:08:06.702 11:15:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:06.702 11:15:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.702 11:15:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
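The disk-count checks sprinkled through each round follow one pattern: nbd_get_disks returns a JSON array, jq extracts the device nodes, and grep -c counts them (grep exits non-zero on an empty list, hence the bare true in the trace). Roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    disks_json=$($rpc -s $sock nbd_get_disks)
    disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$disks_name" | grep -c /dev/nbd || true)
    # The traced script errors when the count differs from the expected 2.
    [ "$count" -eq 2 ] || echo "expected 2 nbd devices, found $count"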
00:08:06.702 11:15:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.702 11:15:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:06.702 11:15:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.702 11:15:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:06.702 11:15:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:06.961 Malloc0 00:08:06.961 11:15:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:07.221 Malloc1 00:08:07.221 11:15:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:07.221 11:15:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.221 11:15:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:07.221 11:15:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:07.221 11:15:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:07.221 11:15:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:07.221 11:15:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:07.221 11:15:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.221 11:15:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:07.221 11:15:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:07.221 11:15:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:07.221 11:15:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:07.221 11:15:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:07.221 11:15:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:07.221 11:15:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:07.221 11:15:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:07.480 /dev/nbd0 00:08:07.480 11:15:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:07.480 11:15:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:07.480 11:15:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:07.480 11:15:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:07.480 11:15:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:07.480 11:15:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:07.480 11:15:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:07.480 11:15:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:07.480 11:15:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:07.480 11:15:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:07.480 11:15:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:07.480 1+0 records in 00:08:07.480 1+0 records out 
00:08:07.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334696 s, 12.2 MB/s 00:08:07.480 11:15:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:07.480 11:15:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:07.480 11:15:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:07.480 11:15:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:07.480 11:15:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:07.480 11:15:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.480 11:15:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:07.480 11:15:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:07.739 /dev/nbd1 00:08:07.739 11:15:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:07.739 11:15:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:07.739 11:15:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:07.739 11:15:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:07.739 11:15:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:07.739 11:15:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:07.739 11:15:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:07.739 11:15:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:07.739 11:15:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:07.739 11:15:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:07.739 11:15:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:07.739 1+0 records in 00:08:07.739 1+0 records out 00:08:07.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463822 s, 8.8 MB/s 00:08:07.739 11:15:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:07.740 11:15:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:07.740 11:15:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:07.740 11:15:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:07.740 11:15:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:07.740 11:15:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.740 11:15:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:07.740 11:15:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:07.740 11:15:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.740 11:15:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:08.000 { 00:08:08.000 "nbd_device": "/dev/nbd0", 00:08:08.000 "bdev_name": "Malloc0" 00:08:08.000 }, 00:08:08.000 { 00:08:08.000 "nbd_device": "/dev/nbd1", 00:08:08.000 "bdev_name": "Malloc1" 00:08:08.000 } 
00:08:08.000 ]' 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:08.000 { 00:08:08.000 "nbd_device": "/dev/nbd0", 00:08:08.000 "bdev_name": "Malloc0" 00:08:08.000 }, 00:08:08.000 { 00:08:08.000 "nbd_device": "/dev/nbd1", 00:08:08.000 "bdev_name": "Malloc1" 00:08:08.000 } 00:08:08.000 ]' 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:08.000 /dev/nbd1' 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:08.000 /dev/nbd1' 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:08.000 256+0 records in 00:08:08.000 256+0 records out 00:08:08.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524863 s, 200 MB/s 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.000 11:15:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:08.000 256+0 records in 00:08:08.000 256+0 records out 00:08:08.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296611 s, 35.4 MB/s 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:08.000 256+0 records in 00:08:08.000 256+0 records out 00:08:08.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311241 s, 33.7 MB/s 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:08.000 11:15:35 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.000 11:15:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:08.259 11:15:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:08.259 11:15:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:08.259 11:15:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:08.259 11:15:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.259 11:15:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.259 11:15:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:08.259 11:15:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:08.259 11:15:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.259 11:15:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.259 11:15:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:08.517 11:15:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:08.517 11:15:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:08.517 11:15:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:08.517 11:15:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.517 11:15:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.517 11:15:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:08.517 11:15:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:08.517 11:15:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.517 11:15:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:08.517 11:15:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.517 11:15:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.775 11:15:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:08.775 11:15:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:08.775 11:15:35 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:08.775 11:15:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:08.775 11:15:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:08.775 11:15:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:08.775 11:15:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:08.775 11:15:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:08.775 11:15:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:08.775 11:15:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:08.775 11:15:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:08.775 11:15:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:08.775 11:15:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:09.343 11:15:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:10.722 [2024-12-10 11:15:37.494555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:10.722 [2024-12-10 11:15:37.620709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.722 [2024-12-10 11:15:37.620713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.722 [2024-12-10 11:15:37.824662] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:10.722 [2024-12-10 11:15:37.824756] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:12.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:12.627 11:15:39 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59353 /var/tmp/spdk-nbd.sock 00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59353 ']' 00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
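Teardown, as traced at the end of each round: detach each device over RPC, wait for its /proc/partitions entry to disappear (waitfornbd_exit), then ask the target to exit. A sketch using the same RPC calls and poll loop (poll interval again assumed):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    for nbd in /dev/nbd0 /dev/nbd1; do
        $rpc -s $sock nbd_stop_disk $nbd
        name=$(basename $nbd)
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break   # gone from the kernel?
            sleep 0.1  # assumed
        done
    done
    $rpc -s $sock spdk_kill_instance SIGTERM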
00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:12.627 11:15:39 event.app_repeat -- event/event.sh@39 -- # killprocess 59353 00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59353 ']' 00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59353 00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59353 00:08:12.627 killing process with pid 59353 00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59353' 00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59353 00:08:12.627 11:15:39 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59353 00:08:13.560 spdk_app_start is called in Round 0. 00:08:13.560 Shutdown signal received, stop current app iteration 00:08:13.560 Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 reinitialization... 00:08:13.560 spdk_app_start is called in Round 1. 00:08:13.560 Shutdown signal received, stop current app iteration 00:08:13.560 Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 reinitialization... 00:08:13.560 spdk_app_start is called in Round 2. 00:08:13.560 Shutdown signal received, stop current app iteration 00:08:13.560 Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 reinitialization... 00:08:13.560 spdk_app_start is called in Round 3. 00:08:13.560 Shutdown signal received, stop current app iteration 00:08:13.560 11:15:40 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:13.560 11:15:40 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:13.560 00:08:13.560 real 0m20.017s 00:08:13.560 user 0m42.803s 00:08:13.560 sys 0m3.247s 00:08:13.560 11:15:40 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.560 11:15:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:13.819 ************************************ 00:08:13.819 END TEST app_repeat 00:08:13.819 ************************************ 00:08:13.819 11:15:40 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:13.819 11:15:40 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:13.819 11:15:40 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.819 11:15:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.819 11:15:40 event -- common/autotest_common.sh@10 -- # set +x 00:08:13.819 ************************************ 00:08:13.819 START TEST cpu_locks 00:08:13.819 ************************************ 00:08:13.819 11:15:40 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:13.819 * Looking for test storage... 
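The killprocess helper traced at the end of app_repeat boils down to: confirm the pid was given and is alive, refuse to kill a sudo wrapper, then SIGTERM and reap. Reconstructed from the calls in the trace (what the real helper does on the sudo branch beyond the comparison shown here is an assumption):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1               # is the process alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            return 1                             # assumed: real helper treats sudo specially
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }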
00:08:13.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:13.819 11:15:40 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:13.819 11:15:40 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:08:13.819 11:15:40 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:14.087 11:15:40 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.087 11:15:40 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:14.087 11:15:40 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.087 11:15:40 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:14.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.087 --rc genhtml_branch_coverage=1 00:08:14.087 --rc genhtml_function_coverage=1 00:08:14.087 --rc genhtml_legend=1 00:08:14.087 --rc geninfo_all_blocks=1 00:08:14.087 --rc geninfo_unexecuted_blocks=1 00:08:14.087 00:08:14.087 ' 00:08:14.087 11:15:40 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:14.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.087 --rc genhtml_branch_coverage=1 00:08:14.087 --rc genhtml_function_coverage=1 
00:08:14.087 --rc genhtml_legend=1 00:08:14.087 --rc geninfo_all_blocks=1 00:08:14.087 --rc geninfo_unexecuted_blocks=1 00:08:14.087 00:08:14.087 ' 00:08:14.087 11:15:40 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:14.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.087 --rc genhtml_branch_coverage=1 00:08:14.087 --rc genhtml_function_coverage=1 00:08:14.087 --rc genhtml_legend=1 00:08:14.087 --rc geninfo_all_blocks=1 00:08:14.087 --rc geninfo_unexecuted_blocks=1 00:08:14.087 00:08:14.087 ' 00:08:14.087 11:15:40 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:14.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.087 --rc genhtml_branch_coverage=1 00:08:14.087 --rc genhtml_function_coverage=1 00:08:14.087 --rc genhtml_legend=1 00:08:14.087 --rc geninfo_all_blocks=1 00:08:14.087 --rc geninfo_unexecuted_blocks=1 00:08:14.087 00:08:14.087 ' 00:08:14.087 11:15:40 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:14.087 11:15:40 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:14.087 11:15:40 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:14.087 11:15:40 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:14.087 11:15:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.087 11:15:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.087 11:15:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:14.087 ************************************ 00:08:14.087 START TEST default_locks 00:08:14.087 ************************************ 00:08:14.087 11:15:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:08:14.087 11:15:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:14.087 11:15:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59802 00:08:14.087 11:15:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59802 00:08:14.088 11:15:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59802 ']' 00:08:14.088 11:15:40 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.088 11:15:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.088 11:15:40 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.088 11:15:40 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.088 11:15:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:14.088 [2024-12-10 11:15:41.089041] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
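The lcov probe traced above gates the coverage flags on a pure-bash dotted-version compare (lt 1.15 2): both strings are split on IFS=.-: and compared field by field. A standalone sketch of that comparison, assuming numeric components as in the scripts/common.sh logic traced here:

  # Succeed when dotted version $1 sorts strictly before $2.
  ver_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<<"$1"
    IFS=.-: read -ra v2 <<<"$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                                        # equal is not "less than"
  }
  ver_lt 1.15 2 && echo "lcov predates 2.x; keep the --rc workarounds"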
00:08:14.088 [2024-12-10 11:15:41.089185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59802 ] 00:08:14.348 [2024-12-10 11:15:41.271398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.348 [2024-12-10 11:15:41.389052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.727 11:15:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.727 11:15:42 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:08:15.727 11:15:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59802 00:08:15.727 11:15:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59802 00:08:15.727 11:15:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:15.727 11:15:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59802 00:08:15.727 11:15:42 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59802 ']' 00:08:15.727 11:15:42 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59802 00:08:15.727 11:15:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:08:15.727 11:15:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.727 11:15:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59802 00:08:15.986 11:15:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.986 11:15:42 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.986 killing process with pid 59802 00:08:15.986 11:15:42 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59802' 00:08:15.986 11:15:42 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59802 00:08:15.986 11:15:42 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59802 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59802 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59802 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59802 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59802 ']' 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
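The positive check above, and every later test in this file, goes through cpu_locks.sh's locks_exist, which simply asks lslocks whether the pid still holds an spdk_cpu_lock file. As traced:

  # True while $1 still holds at least one /var/tmp/spdk_cpu_lock_* lock.
  locks_exist() { lslocks -p "$1" | grep -q spdk_cpu_lock; }

  locks_exist 59802 && echo 'core locks held'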
00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:18.567 ERROR: process (pid: 59802) is no longer running 00:08:18.567 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59802) - No such process 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:18.567 00:08:18.567 real 0m4.347s 00:08:18.567 user 0m4.216s 00:08:18.567 sys 0m0.770s 00:08:18.567 ************************************ 00:08:18.567 END TEST default_locks 00:08:18.567 ************************************ 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.567 11:15:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:18.567 11:15:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:18.567 11:15:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.567 11:15:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.567 11:15:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:18.567 ************************************ 00:08:18.567 START TEST default_locks_via_rpc 00:08:18.567 ************************************ 00:08:18.567 11:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:08:18.567 11:15:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59885 00:08:18.567 11:15:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:18.567 11:15:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59885 00:08:18.567 11:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59885 ']' 00:08:18.568 11:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.568 11:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.568 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:08:18.568 11:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.568 11:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.568 11:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.568 [2024-12-10 11:15:45.499685] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:08:18.568 [2024-12-10 11:15:45.499816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59885 ] 00:08:18.827 [2024-12-10 11:15:45.679265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.827 [2024-12-10 11:15:45.800298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.761 11:15:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.761 11:15:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:19.761 11:15:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:19.761 11:15:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.761 11:15:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.761 11:15:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.761 11:15:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:19.761 11:15:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:19.761 11:15:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:19.761 11:15:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:19.761 11:15:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:19.761 11:15:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.761 11:15:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.761 11:15:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.761 11:15:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59885 00:08:19.761 11:15:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59885 00:08:19.761 11:15:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:20.326 11:15:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59885 00:08:20.326 11:15:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59885 ']' 00:08:20.326 11:15:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59885 00:08:20.326 11:15:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:08:20.326 11:15:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.326 11:15:47 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59885 00:08:20.326 11:15:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.326 11:15:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.326 killing process with pid 59885 00:08:20.326 11:15:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59885' 00:08:20.326 11:15:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59885 00:08:20.326 11:15:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59885 00:08:22.858 00:08:22.858 real 0m4.325s 00:08:22.858 user 0m4.286s 00:08:22.858 sys 0m0.699s 00:08:22.858 11:15:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.858 11:15:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.858 ************************************ 00:08:22.858 END TEST default_locks_via_rpc 00:08:22.858 ************************************ 00:08:22.858 11:15:49 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:22.858 11:15:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.858 11:15:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.858 11:15:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:22.858 ************************************ 00:08:22.858 START TEST non_locking_app_on_locked_coremask 00:08:22.858 ************************************ 00:08:22.858 11:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:08:22.858 11:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59960 00:08:22.858 11:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:22.858 11:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59960 /var/tmp/spdk.sock 00:08:22.858 11:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59960 ']' 00:08:22.858 11:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.858 11:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.858 11:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.858 11:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.858 11:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:22.859 [2024-12-10 11:15:49.902896] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
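default_locks_via_rpc, which ends above, exercises the runtime counterpart of the startup locks: framework_disable_cpumask_locks drops the per-core lock files and framework_enable_cpumask_locks re-acquires them, with lslocks confirming each transition. A sketch against the default /var/tmp/spdk.sock, using the pid from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" framework_disable_cpumask_locks      # lock files are released...
  locks_exist 59885 || echo 'no spdk_cpu_lock files held'
  "$rpc" framework_enable_cpumask_locks       # ...and re-acquired on demand
  locks_exist 59885 && echo 'core 0 lock is back'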
00:08:22.859 [2024-12-10 11:15:49.903246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59960 ] 00:08:23.117 [2024-12-10 11:15:50.083432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.117 [2024-12-10 11:15:50.210556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.051 11:15:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.051 11:15:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:24.052 11:15:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59976 00:08:24.052 11:15:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59976 /var/tmp/spdk2.sock 00:08:24.052 11:15:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:24.052 11:15:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59976 ']' 00:08:24.052 11:15:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:24.052 11:15:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.052 11:15:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:24.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:24.052 11:15:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.052 11:15:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:24.310 [2024-12-10 11:15:51.208944] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:08:24.310 [2024-12-10 11:15:51.209268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59976 ] 00:08:24.310 [2024-12-10 11:15:51.391587] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
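The second target above lands on the already-claimed core 0 yet starts cleanly, because --disable-cpumask-locks skips the claim entirely, hence the "CPU core locks deactivated" notice instead of an error; it only needs its own RPC socket. The launch pair, in outline:

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$tgt" -m 0x1 & pid1=$!                    # claims /var/tmp/spdk_cpu_lock_000
  "$tgt" -m 0x1 --disable-cpumask-locks \
         -r /var/tmp/spdk2.sock & pid2=$!    # same core, no claim attempted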
00:08:24.310 [2024-12-10 11:15:51.391649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.569 [2024-12-10 11:15:51.625455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.102 11:15:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.102 11:15:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:27.102 11:15:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59960 00:08:27.102 11:15:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59960 00:08:27.102 11:15:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:28.034 11:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59960 00:08:28.034 11:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59960 ']' 00:08:28.034 11:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59960 00:08:28.034 11:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:28.034 11:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.034 11:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59960 00:08:28.034 killing process with pid 59960 00:08:28.034 11:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.034 11:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.034 11:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59960' 00:08:28.034 11:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59960 00:08:28.034 11:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59960 00:08:33.319 11:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59976 00:08:33.319 11:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59976 ']' 00:08:33.319 11:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59976 00:08:33.319 11:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:33.319 11:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.319 11:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59976 00:08:33.319 killing process with pid 59976 00:08:33.319 11:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.319 11:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.319 11:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59976' 00:08:33.319 11:15:59 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59976 00:08:33.319 11:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59976 00:08:35.222 ************************************ 00:08:35.222 END TEST non_locking_app_on_locked_coremask 00:08:35.222 ************************************ 00:08:35.222 00:08:35.222 real 0m12.312s 00:08:35.222 user 0m12.750s 00:08:35.222 sys 0m1.449s 00:08:35.223 11:16:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.223 11:16:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:35.223 11:16:02 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:35.223 11:16:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.223 11:16:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.223 11:16:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:35.223 ************************************ 00:08:35.223 START TEST locking_app_on_unlocked_coremask 00:08:35.223 ************************************ 00:08:35.223 11:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:08:35.223 11:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60137 00:08:35.223 11:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60137 /var/tmp/spdk.sock 00:08:35.223 11:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:35.223 11:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60137 ']' 00:08:35.223 11:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.223 11:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.223 11:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.223 11:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.223 11:16:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:35.223 [2024-12-10 11:16:02.282337] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:08:35.223 [2024-12-10 11:16:02.282472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60137 ] 00:08:35.481 [2024-12-10 11:16:02.466412] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:35.481 [2024-12-10 11:16:02.466477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.481 [2024-12-10 11:16:02.583058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.468 11:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.468 11:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:36.468 11:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60153 00:08:36.468 11:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:36.468 11:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60153 /var/tmp/spdk2.sock 00:08:36.468 11:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60153 ']' 00:08:36.468 11:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:36.468 11:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.468 11:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:36.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:36.468 11:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.468 11:16:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:36.468 [2024-12-10 11:16:03.579108] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
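Each launch above blocks in waitforlisten until the new target answers on its socket; the trace shows only the helper's retry budget (max_retries=100) and its banner, so the following is one plausible shape of the loop, assuming it polls the socket with the rpc_get_methods RPC:

  waitforlisten() {
    local pid=$1 addr=${2:-/var/tmp/spdk.sock} i
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    echo "Waiting for process to start up and listen on UNIX domain socket $addr..."
    for (( i = 100; i > 0; i-- )); do
      kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
      [[ -S $addr ]] && "$rpc" -s "$addr" rpc_get_methods &>/dev/null && return 0
      sleep 0.5
    done
    return 1
  }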
00:08:36.468 [2024-12-10 11:16:03.579478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60153 ] 00:08:36.727 [2024-12-10 11:16:03.762868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.986 [2024-12-10 11:16:04.002266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.520 11:16:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.520 11:16:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:39.520 11:16:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60153 00:08:39.520 11:16:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60153 00:08:39.520 11:16:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:40.088 11:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60137 00:08:40.088 11:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60137 ']' 00:08:40.088 11:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60137 00:08:40.088 11:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:40.088 11:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.088 11:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60137 00:08:40.088 killing process with pid 60137 00:08:40.088 11:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:40.088 11:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:40.088 11:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60137' 00:08:40.088 11:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60137 00:08:40.088 11:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60137 00:08:45.446 11:16:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60153 00:08:45.446 11:16:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60153 ']' 00:08:45.446 11:16:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60153 00:08:45.446 11:16:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:45.446 11:16:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.446 11:16:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60153 00:08:45.446 killing process with pid 60153 00:08:45.446 11:16:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.446 11:16:11 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.446 11:16:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60153' 00:08:45.446 11:16:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60153 00:08:45.446 11:16:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60153 00:08:47.348 ************************************ 00:08:47.348 END TEST locking_app_on_unlocked_coremask 00:08:47.348 ************************************ 00:08:47.348 00:08:47.348 real 0m12.195s 00:08:47.348 user 0m12.537s 00:08:47.348 sys 0m1.462s 00:08:47.348 11:16:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.348 11:16:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:47.348 11:16:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:47.348 11:16:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.348 11:16:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.348 11:16:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:47.348 ************************************ 00:08:47.348 START TEST locking_app_on_locked_coremask 00:08:47.348 ************************************ 00:08:47.348 11:16:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:08:47.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.348 11:16:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60307 00:08:47.348 11:16:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60307 /var/tmp/spdk.sock 00:08:47.348 11:16:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60307 ']' 00:08:47.348 11:16:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.348 11:16:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:47.348 11:16:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.348 11:16:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.348 11:16:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.348 11:16:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:47.606 [2024-12-10 11:16:14.541100] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:47.606 [2024-12-10 11:16:14.541385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60307 ] 00:08:47.865 [2024-12-10 11:16:14.723712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.865 [2024-12-10 11:16:14.838769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60328 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60328 /var/tmp/spdk2.sock 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60328 /var/tmp/spdk2.sock 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:48.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60328 /var/tmp/spdk2.sock 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60328 ']' 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.801 11:16:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:48.801 [2024-12-10 11:16:15.845731] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
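The second target traced above runs without --disable-cpumask-locks on the same core mask, so, as the ERROR lines just below show, it cannot take the core 0 claim and exits. The claim itself is a lock on a well-known per-core path; an illustrative reproduction with flock(1) follows (SPDK takes the lock in-process, so the exact locking call is an assumption here):

  # A second exclusive, non-blocking lock on the same core file fails at once.
  exec 9>/var/tmp/spdk_cpu_lock_000
  flock -xn 9 || { echo 'core 0 already claimed'; exit 1; }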
00:08:48.801 [2024-12-10 11:16:15.846091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60328 ] 00:08:49.060 [2024-12-10 11:16:16.026750] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60307 has claimed it. 00:08:49.060 [2024-12-10 11:16:16.026835] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:49.628 ERROR: process (pid: 60328) is no longer running 00:08:49.628 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60328) - No such process 00:08:49.628 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.628 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:49.628 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:49.628 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:49.628 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:49.628 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:49.628 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60307 00:08:49.628 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60307 00:08:49.628 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:49.887 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60307 00:08:49.887 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60307 ']' 00:08:49.887 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60307 00:08:49.887 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:49.887 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.887 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60307 00:08:49.887 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:49.887 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:49.887 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60307' 00:08:49.887 killing process with pid 60307 00:08:49.887 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60307 00:08:49.888 11:16:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60307 00:08:52.525 00:08:52.525 real 0m4.911s 00:08:52.525 user 0m5.063s 00:08:52.525 sys 0m0.879s 00:08:52.525 11:16:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.525 ************************************ 00:08:52.525 END 
TEST locking_app_on_locked_coremask 00:08:52.525 ************************************ 00:08:52.525 11:16:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:52.525 11:16:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:52.525 11:16:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.525 11:16:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.525 11:16:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:52.525 ************************************ 00:08:52.525 START TEST locking_overlapped_coremask 00:08:52.525 ************************************ 00:08:52.525 11:16:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:52.525 11:16:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60398 00:08:52.525 11:16:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:52.525 11:16:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60398 /var/tmp/spdk.sock 00:08:52.525 11:16:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60398 ']' 00:08:52.525 11:16:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.525 11:16:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.525 11:16:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.525 11:16:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.525 11:16:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:52.525 [2024-12-10 11:16:19.524294] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:52.525 [2024-12-10 11:16:19.525044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60398 ] 00:08:52.784 [2024-12-10 11:16:19.707276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:52.784 [2024-12-10 11:16:19.825009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.784 [2024-12-10 11:16:19.825120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.784 [2024-12-10 11:16:19.825152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60416 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60416 /var/tmp/spdk2.sock 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60416 /var/tmp/spdk2.sock 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60416 /var/tmp/spdk2.sock 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60416 ']' 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:53.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.722 11:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:53.981 [2024-12-10 11:16:20.858599] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
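The overlapped pairing above is -m 0x7 (cores 0-2) against -m 0x1c (cores 2-4); the two masks collide only on core 2, the exact core named in the claim error that follows. The collision is a one-line mask intersection:

  m1=0x7 m2=0x1c
  printf 'shared cores: %#x\n' $(( m1 & m2 ))   # 0x4, i.e. core 2 only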
00:08:53.981 [2024-12-10 11:16:20.859140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60416 ] 00:08:53.981 [2024-12-10 11:16:21.048035] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60398 has claimed it. 00:08:53.981 [2024-12-10 11:16:21.048135] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:54.550 ERROR: process (pid: 60416) is no longer running 00:08:54.550 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60416) - No such process 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60398 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60398 ']' 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60398 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60398 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60398' 00:08:54.550 killing process with pid 60398 00:08:54.550 11:16:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60398 00:08:54.550 11:16:21 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60398 00:08:57.084 00:08:57.084 real 0m4.570s 00:08:57.084 user 0m12.422s 00:08:57.084 sys 0m0.665s 00:08:57.084 11:16:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.084 ************************************ 00:08:57.084 END TEST locking_overlapped_coremask 00:08:57.084 ************************************ 00:08:57.084 11:16:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:57.084 11:16:24 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:57.084 11:16:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.084 11:16:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.084 11:16:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:57.084 ************************************ 00:08:57.084 START TEST locking_overlapped_coremask_via_rpc 00:08:57.084 ************************************ 00:08:57.084 11:16:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:57.084 11:16:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:57.084 11:16:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60480 00:08:57.084 11:16:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60480 /var/tmp/spdk.sock 00:08:57.084 11:16:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60480 ']' 00:08:57.084 11:16:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.084 11:16:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.084 11:16:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.084 11:16:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.084 11:16:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.084 [2024-12-10 11:16:24.184137] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:08:57.084 [2024-12-10 11:16:24.184277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60480 ] 00:08:57.342 [2024-12-10 11:16:24.357319] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
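Before the teardown above, check_remaining_locks (traced a few lines back) asserted that exactly cores 0-2 were still claimed by comparing a glob of the live lock files against a brace expansion of the expected names:

  check_remaining_locks() {
    local locks=(/var/tmp/spdk_cpu_lock_*)
    local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]   # same files, same order
  }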
00:08:57.342 [2024-12-10 11:16:24.357397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:57.601 [2024-12-10 11:16:24.483758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.601 [2024-12-10 11:16:24.483809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.601 [2024-12-10 11:16:24.483815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.537 11:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.537 11:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:58.537 11:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60503 00:08:58.537 11:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60503 /var/tmp/spdk2.sock 00:08:58.537 11:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:58.537 11:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60503 ']' 00:08:58.537 11:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:58.537 11:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.537 11:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:58.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:58.537 11:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.537 11:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.537 [2024-12-10 11:16:25.512207] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:08:58.537 [2024-12-10 11:16:25.512629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60503 ] 00:08:58.797 [2024-12-10 11:16:25.699713] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
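At this point both targets are up and the contested resource is visible in the masks themselves: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is the single shared core, and each core a target claims is backed by a /var/tmp/spdk_cpu_lock_<core> file (the _000.._002 files checked earlier in the trace). The overlap is plain mask arithmetic:

    printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0b111 & 0b11100 = 0x4, i.e. core 2
    ls /var/tmp/spdk_cpu_lock_*                          # one lock file per core currently claimed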
00:08:58.797 [2024-12-10 11:16:25.699822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:59.056 [2024-12-10 11:16:25.951885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.056 [2024-12-10 11:16:25.955110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.056 [2024-12-10 11:16:25.955144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.593 [2024-12-10 11:16:28.203115] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60480 has claimed it. 
00:09:01.593 request: 00:09:01.593 { 00:09:01.593 "method": "framework_enable_cpumask_locks", 00:09:01.593 "req_id": 1 00:09:01.593 } 00:09:01.593 Got JSON-RPC error response 00:09:01.593 response: 00:09:01.593 { 00:09:01.593 "code": -32603, 00:09:01.593 "message": "Failed to claim CPU core: 2" 00:09:01.593 } 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60480 /var/tmp/spdk.sock 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60480 ']' 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60503 /var/tmp/spdk2.sock 00:09:01.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60503 ']' 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
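The -32603 response above is the point of the via_rpc variant: with claiming deferred at startup, framework_enable_cpumask_locks succeeds on the first socket and must fail on the second because core 2 is already held, after which check_remaining_locks (re-run just below) confirms that only the first instance's lock files survive. A condensed sketch of both steps, assuming rpc.py exits nonzero on an error response:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks       # first instance claims cores 0-2
    if "$rpc" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo 'unexpected success: core 2 is already locked' >&2; exit 1
    fi
    locks=(/var/tmp/spdk_cpu_lock_*)                     # whatever lock files actually exist
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2, held by the 0x7 instance
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] || { echo "stray locks: ${locks[*]}" >&2; exit 1; }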
00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:01.593 ************************************ 00:09:01.593 END TEST locking_overlapped_coremask_via_rpc 00:09:01.593 ************************************ 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:01.593 00:09:01.593 real 0m4.592s 00:09:01.593 user 0m1.376s 00:09:01.593 sys 0m0.274s 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.593 11:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.852 11:16:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:01.852 11:16:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60480 ]] 00:09:01.852 11:16:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60480 00:09:01.852 11:16:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60480 ']' 00:09:01.852 11:16:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60480 00:09:01.852 11:16:28 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:01.852 11:16:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.852 11:16:28 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60480 00:09:01.852 killing process with pid 60480 00:09:01.852 11:16:28 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.853 11:16:28 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.853 11:16:28 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60480' 00:09:01.853 11:16:28 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60480 00:09:01.853 11:16:28 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60480 00:09:04.436 11:16:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60503 ]] 00:09:04.436 11:16:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60503 00:09:04.436 11:16:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60503 ']' 00:09:04.436 11:16:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60503 00:09:04.436 11:16:31 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:04.436 11:16:31 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.436 
11:16:31 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60503 00:09:04.436 killing process with pid 60503 00:09:04.436 11:16:31 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:04.436 11:16:31 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:04.436 11:16:31 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60503' 00:09:04.436 11:16:31 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60503 00:09:04.436 11:16:31 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60503 00:09:06.972 11:16:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:06.972 11:16:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:06.972 11:16:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60480 ]] 00:09:06.972 11:16:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60480 00:09:06.972 11:16:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60480 ']' 00:09:06.972 11:16:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60480 00:09:06.972 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60480) - No such process 00:09:06.972 Process with pid 60480 is not found 00:09:06.972 11:16:33 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60480 is not found' 00:09:06.972 11:16:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60503 ]] 00:09:06.972 11:16:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60503 00:09:06.972 11:16:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60503 ']' 00:09:06.972 Process with pid 60503 is not found 00:09:06.972 11:16:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60503 00:09:06.972 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60503) - No such process 00:09:06.972 11:16:33 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60503 is not found' 00:09:06.972 11:16:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:06.972 00:09:06.972 real 0m52.967s 00:09:06.972 user 1m29.802s 00:09:06.972 sys 0m7.471s 00:09:06.972 11:16:33 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.972 11:16:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:06.972 ************************************ 00:09:06.972 END TEST cpu_locks 00:09:06.972 ************************************ 00:09:06.972 ************************************ 00:09:06.972 END TEST event 00:09:06.972 ************************************ 00:09:06.972 00:09:06.972 real 1m25.428s 00:09:06.972 user 2m34.229s 00:09:06.972 sys 0m12.073s 00:09:06.972 11:16:33 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.972 11:16:33 event -- common/autotest_common.sh@10 -- # set +x 00:09:06.972 11:16:33 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:06.972 11:16:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.972 11:16:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.972 11:16:33 -- common/autotest_common.sh@10 -- # set +x 00:09:06.972 ************************************ 00:09:06.972 START TEST thread 00:09:06.972 ************************************ 00:09:06.972 11:16:33 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:06.972 * Looking for test storage... 
00:09:06.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:06.972 11:16:33 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:06.972 11:16:33 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:09:06.972 11:16:33 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:06.972 11:16:34 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:06.972 11:16:34 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.972 11:16:34 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.972 11:16:34 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.972 11:16:34 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.972 11:16:34 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.972 11:16:34 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.972 11:16:34 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.972 11:16:34 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.972 11:16:34 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.972 11:16:34 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.972 11:16:34 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.972 11:16:34 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:06.972 11:16:34 thread -- scripts/common.sh@345 -- # : 1 00:09:06.972 11:16:34 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.972 11:16:34 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:06.972 11:16:34 thread -- scripts/common.sh@365 -- # decimal 1 00:09:06.972 11:16:34 thread -- scripts/common.sh@353 -- # local d=1 00:09:06.972 11:16:34 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.972 11:16:34 thread -- scripts/common.sh@355 -- # echo 1 00:09:06.972 11:16:34 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.972 11:16:34 thread -- scripts/common.sh@366 -- # decimal 2 00:09:06.972 11:16:34 thread -- scripts/common.sh@353 -- # local d=2 00:09:06.972 11:16:34 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.972 11:16:34 thread -- scripts/common.sh@355 -- # echo 2 00:09:06.972 11:16:34 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.972 11:16:34 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.972 11:16:34 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.972 11:16:34 thread -- scripts/common.sh@368 -- # return 0 00:09:06.972 11:16:34 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.972 11:16:34 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:06.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.972 --rc genhtml_branch_coverage=1 00:09:06.972 --rc genhtml_function_coverage=1 00:09:06.972 --rc genhtml_legend=1 00:09:06.972 --rc geninfo_all_blocks=1 00:09:06.972 --rc geninfo_unexecuted_blocks=1 00:09:06.972 00:09:06.972 ' 00:09:06.972 11:16:34 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:06.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.972 --rc genhtml_branch_coverage=1 00:09:06.972 --rc genhtml_function_coverage=1 00:09:06.972 --rc genhtml_legend=1 00:09:06.972 --rc geninfo_all_blocks=1 00:09:06.972 --rc geninfo_unexecuted_blocks=1 00:09:06.972 00:09:06.972 ' 00:09:06.972 11:16:34 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:06.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:06.972 --rc genhtml_branch_coverage=1 00:09:06.972 --rc genhtml_function_coverage=1 00:09:06.972 --rc genhtml_legend=1 00:09:06.972 --rc geninfo_all_blocks=1 00:09:06.972 --rc geninfo_unexecuted_blocks=1 00:09:06.972 00:09:06.972 ' 00:09:06.972 11:16:34 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:06.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.972 --rc genhtml_branch_coverage=1 00:09:06.972 --rc genhtml_function_coverage=1 00:09:06.972 --rc genhtml_legend=1 00:09:06.972 --rc geninfo_all_blocks=1 00:09:06.972 --rc geninfo_unexecuted_blocks=1 00:09:06.972 00:09:06.972 ' 00:09:06.972 11:16:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:06.972 11:16:34 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:06.972 11:16:34 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.972 11:16:34 thread -- common/autotest_common.sh@10 -- # set +x 00:09:06.972 ************************************ 00:09:06.972 START TEST thread_poller_perf 00:09:06.972 ************************************ 00:09:06.972 11:16:34 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:07.232 [2024-12-10 11:16:34.123980] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:07.232 [2024-12-10 11:16:34.124284] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60704 ] 00:09:07.232 [2024-12-10 11:16:34.306457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.491 [2024-12-10 11:16:34.421570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.491 Running 1000 pollers for 1 seconds with 1 microseconds period. 
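The banner just printed decodes the flags of the traced invocation: -b 1000 registers 1000 pollers, -l 1 gives each a 1 microsecond period, and -t 1 runs the measurement for 1 second; the second run below repeats this with -l 0, i.e. busy-loop pollers with no timer period. Condensed from the run_test trace:

    perf=/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf
    "$perf" -b 1000 -l 1 -t 1   # 1000 pollers, 1 us period, 1 s run - matches the banner above
    "$perf" -b 1000 -l 0 -t 1   # same, but with a 0 us period (busy polling)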
00:09:08.913 [2024-12-10T11:16:36.027Z] ====================================== 00:09:08.913 [2024-12-10T11:16:36.027Z] busy:2499471254 (cyc) 00:09:08.913 [2024-12-10T11:16:36.027Z] total_run_count: 385000 00:09:08.913 [2024-12-10T11:16:36.027Z] tsc_hz: 2490000000 (cyc) 00:09:08.913 [2024-12-10T11:16:36.027Z] ====================================== 00:09:08.913 [2024-12-10T11:16:36.027Z] poller_cost: 6492 (cyc), 2607 (nsec) 00:09:08.913 00:09:08.913 real 0m1.586s 00:09:08.913 user 0m1.367s 00:09:08.913 sys 0m0.110s 00:09:08.913 ************************************ 00:09:08.913 END TEST thread_poller_perf 00:09:08.913 ************************************ 00:09:08.913 11:16:35 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.913 11:16:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:08.913 11:16:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:08.913 11:16:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:08.913 11:16:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.913 11:16:35 thread -- common/autotest_common.sh@10 -- # set +x 00:09:08.913 ************************************ 00:09:08.913 START TEST thread_poller_perf 00:09:08.913 ************************************ 00:09:08.913 11:16:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:08.913 [2024-12-10 11:16:35.790183] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:08.913 [2024-12-10 11:16:35.790494] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60735 ] 00:09:08.913 [2024-12-10 11:16:35.973032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.171 Running 1000 pollers for 1 seconds with 0 microseconds period. 
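Before the zero-period run's numbers arrive, note that the first run's summary above reduces to simple arithmetic: poller_cost is the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz. Recomputing the printed figures (integer division assumed, as the output suggests):

    busy=2499471254 total_run_count=385000 tsc_hz=2490000000
    cyc=$(( busy / total_run_count ))        # 2499471254 / 385000 = 6492 cycles per poller invocation
    nsec=$(( cyc * 1000000000 / tsc_hz ))    # 6492 cycles at 2.49 GHz ~= 2607 ns
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"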
00:09:09.171 [2024-12-10 11:16:36.097038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.551 [2024-12-10T11:16:37.665Z] ====================================== 00:09:10.551 [2024-12-10T11:16:37.665Z] busy:2494151682 (cyc) 00:09:10.551 [2024-12-10T11:16:37.665Z] total_run_count: 4360000 00:09:10.551 [2024-12-10T11:16:37.665Z] tsc_hz: 2490000000 (cyc) 00:09:10.551 [2024-12-10T11:16:37.665Z] ====================================== 00:09:10.551 [2024-12-10T11:16:37.665Z] poller_cost: 572 (cyc), 229 (nsec) 00:09:10.551 00:09:10.551 real 0m1.592s 00:09:10.551 user 0m1.389s 00:09:10.551 sys 0m0.095s 00:09:10.551 11:16:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.551 11:16:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:10.551 ************************************ 00:09:10.551 END TEST thread_poller_perf 00:09:10.551 ************************************ 00:09:10.551 11:16:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:10.551 00:09:10.551 real 0m3.561s 00:09:10.551 user 0m2.942s 00:09:10.551 sys 0m0.411s 00:09:10.551 ************************************ 00:09:10.551 END TEST thread 00:09:10.551 ************************************ 00:09:10.551 11:16:37 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.551 11:16:37 thread -- common/autotest_common.sh@10 -- # set +x 00:09:10.551 11:16:37 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:10.551 11:16:37 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:10.551 11:16:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.551 11:16:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.551 11:16:37 -- common/autotest_common.sh@10 -- # set +x 00:09:10.551 ************************************ 00:09:10.551 START TEST app_cmdline 00:09:10.551 ************************************ 00:09:10.551 11:16:37 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:10.551 * Looking for test storage... 
00:09:10.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:10.551 11:16:37 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:10.551 11:16:37 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:10.551 11:16:37 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:09:10.551 11:16:37 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:10.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
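The block above is the coverage-tooling gate that precedes each suite in this log: lt splits the two version strings on '.', '-' and ':' and compares them numerically field by field, and because the installed lcov (1.15) predates 2, the legacy --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 spelling is selected. A condensed sketch of that decision for the traced values, reduced to the first field that differs:

    IFS=.-: read -ra ver1 <<< "1.15"
    IFS=.-: read -ra ver2 <<< "2"
    if (( ${ver1[0]:-0} < ${ver2[0]:-0} )); then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'   # pre-2.0 flag spelling
    fi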
00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.551 11:16:37 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:10.551 11:16:37 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.551 11:16:37 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:10.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.551 --rc genhtml_branch_coverage=1 00:09:10.551 --rc genhtml_function_coverage=1 00:09:10.551 --rc genhtml_legend=1 00:09:10.551 --rc geninfo_all_blocks=1 00:09:10.551 --rc geninfo_unexecuted_blocks=1 00:09:10.551 00:09:10.551 ' 00:09:10.551 11:16:37 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:10.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.551 --rc genhtml_branch_coverage=1 00:09:10.551 --rc genhtml_function_coverage=1 00:09:10.551 --rc genhtml_legend=1 00:09:10.551 --rc geninfo_all_blocks=1 00:09:10.551 --rc geninfo_unexecuted_blocks=1 00:09:10.551 00:09:10.551 ' 00:09:10.551 11:16:37 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:10.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.551 --rc genhtml_branch_coverage=1 00:09:10.551 --rc genhtml_function_coverage=1 00:09:10.551 --rc genhtml_legend=1 00:09:10.551 --rc geninfo_all_blocks=1 00:09:10.551 --rc geninfo_unexecuted_blocks=1 00:09:10.551 00:09:10.551 ' 00:09:10.551 11:16:37 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:10.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.551 --rc genhtml_branch_coverage=1 00:09:10.551 --rc genhtml_function_coverage=1 00:09:10.551 --rc genhtml_legend=1 00:09:10.551 --rc geninfo_all_blocks=1 00:09:10.551 --rc geninfo_unexecuted_blocks=1 00:09:10.551 00:09:10.551 ' 00:09:10.551 11:16:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:10.551 11:16:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60824 00:09:10.551 11:16:37 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:10.551 11:16:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60824 00:09:10.551 11:16:37 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60824 ']' 00:09:10.551 11:16:37 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.551 11:16:37 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.551 11:16:37 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.551 11:16:37 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.551 11:16:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:10.810 [2024-12-10 11:16:37.806711] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
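This spdk_tgt is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable over the socket. A short sketch of what the allow-list means in practice, again assuming rpc.py exits nonzero on an error response; the version object, the method list, and the -32601 failure all appear verbatim further down the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" spdk_get_version                  # allowed: returns the version object traced below
    "$rpc" rpc_get_methods                   # allowed: lists exactly the two permitted methods
    if "$rpc" env_dpdk_get_mem_stats; then   # anything else fails with -32601 "Method not found"
        echo 'allow-list not enforced' >&2; exit 1
    fi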
00:09:10.810 [2024-12-10 11:16:37.808171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60824 ] 00:09:11.070 [2024-12-10 11:16:38.017079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.070 [2024-12-10 11:16:38.141585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.003 11:16:39 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.003 11:16:39 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:12.003 11:16:39 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:12.261 { 00:09:12.261 "version": "SPDK v25.01-pre git sha1 52a413487", 00:09:12.261 "fields": { 00:09:12.261 "major": 25, 00:09:12.261 "minor": 1, 00:09:12.261 "patch": 0, 00:09:12.261 "suffix": "-pre", 00:09:12.261 "commit": "52a413487" 00:09:12.261 } 00:09:12.261 } 00:09:12.261 11:16:39 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:12.261 11:16:39 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:12.261 11:16:39 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:12.261 11:16:39 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:12.261 11:16:39 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:12.261 11:16:39 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:12.261 11:16:39 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.261 11:16:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:12.261 11:16:39 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:12.261 11:16:39 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.261 11:16:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:12.261 11:16:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:12.261 11:16:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:12.261 11:16:39 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:12.261 11:16:39 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:12.261 11:16:39 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:12.261 11:16:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.261 11:16:39 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:12.261 11:16:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.261 11:16:39 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:12.261 11:16:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.261 11:16:39 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:12.261 11:16:39 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:12.261 11:16:39 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:12.519 request: 00:09:12.519 { 00:09:12.519 "method": "env_dpdk_get_mem_stats", 00:09:12.519 "req_id": 1 00:09:12.519 } 00:09:12.519 Got JSON-RPC error response 00:09:12.519 response: 00:09:12.519 { 00:09:12.519 "code": -32601, 00:09:12.519 "message": "Method not found" 00:09:12.519 } 00:09:12.519 11:16:39 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:12.519 11:16:39 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:12.519 11:16:39 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:12.519 11:16:39 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:12.519 11:16:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60824 00:09:12.519 11:16:39 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60824 ']' 00:09:12.519 11:16:39 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60824 00:09:12.519 11:16:39 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:12.778 11:16:39 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.778 11:16:39 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60824 00:09:12.778 killing process with pid 60824 00:09:12.778 11:16:39 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.778 11:16:39 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.778 11:16:39 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60824' 00:09:12.778 11:16:39 app_cmdline -- common/autotest_common.sh@973 -- # kill 60824 00:09:12.778 11:16:39 app_cmdline -- common/autotest_common.sh@978 -- # wait 60824 00:09:15.325 00:09:15.325 real 0m4.690s 00:09:15.325 user 0m4.906s 00:09:15.325 sys 0m0.679s 00:09:15.325 11:16:42 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.325 ************************************ 00:09:15.325 END TEST app_cmdline 00:09:15.325 ************************************ 00:09:15.325 11:16:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:15.325 11:16:42 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:15.325 11:16:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.325 11:16:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.325 11:16:42 -- common/autotest_common.sh@10 -- # set +x 00:09:15.325 ************************************ 00:09:15.325 START TEST version 00:09:15.325 ************************************ 00:09:15.325 11:16:42 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:15.325 * Looking for test storage... 
00:09:15.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:15.325 11:16:42 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:15.326 11:16:42 version -- common/autotest_common.sh@1711 -- # lcov --version 00:09:15.326 11:16:42 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:15.326 11:16:42 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:15.326 11:16:42 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.326 11:16:42 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.326 11:16:42 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.326 11:16:42 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.326 11:16:42 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.326 11:16:42 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.326 11:16:42 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.326 11:16:42 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.326 11:16:42 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.326 11:16:42 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.326 11:16:42 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.326 11:16:42 version -- scripts/common.sh@344 -- # case "$op" in 00:09:15.326 11:16:42 version -- scripts/common.sh@345 -- # : 1 00:09:15.326 11:16:42 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.326 11:16:42 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:15.326 11:16:42 version -- scripts/common.sh@365 -- # decimal 1 00:09:15.326 11:16:42 version -- scripts/common.sh@353 -- # local d=1 00:09:15.326 11:16:42 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.326 11:16:42 version -- scripts/common.sh@355 -- # echo 1 00:09:15.326 11:16:42 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.326 11:16:42 version -- scripts/common.sh@366 -- # decimal 2 00:09:15.326 11:16:42 version -- scripts/common.sh@353 -- # local d=2 00:09:15.326 11:16:42 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.326 11:16:42 version -- scripts/common.sh@355 -- # echo 2 00:09:15.585 11:16:42 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.585 11:16:42 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.585 11:16:42 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.586 11:16:42 version -- scripts/common.sh@368 -- # return 0 00:09:15.586 11:16:42 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.586 11:16:42 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:15.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.586 --rc genhtml_branch_coverage=1 00:09:15.586 --rc genhtml_function_coverage=1 00:09:15.586 --rc genhtml_legend=1 00:09:15.586 --rc geninfo_all_blocks=1 00:09:15.586 --rc geninfo_unexecuted_blocks=1 00:09:15.586 00:09:15.586 ' 00:09:15.586 11:16:42 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:15.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.586 --rc genhtml_branch_coverage=1 00:09:15.586 --rc genhtml_function_coverage=1 00:09:15.586 --rc genhtml_legend=1 00:09:15.586 --rc geninfo_all_blocks=1 00:09:15.586 --rc geninfo_unexecuted_blocks=1 00:09:15.586 00:09:15.586 ' 00:09:15.586 11:16:42 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:15.586 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:15.586 --rc genhtml_branch_coverage=1 00:09:15.586 --rc genhtml_function_coverage=1 00:09:15.586 --rc genhtml_legend=1 00:09:15.586 --rc geninfo_all_blocks=1 00:09:15.586 --rc geninfo_unexecuted_blocks=1 00:09:15.586 00:09:15.586 ' 00:09:15.586 11:16:42 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:15.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.586 --rc genhtml_branch_coverage=1 00:09:15.586 --rc genhtml_function_coverage=1 00:09:15.586 --rc genhtml_legend=1 00:09:15.586 --rc geninfo_all_blocks=1 00:09:15.586 --rc geninfo_unexecuted_blocks=1 00:09:15.586 00:09:15.586 ' 00:09:15.586 11:16:42 version -- app/version.sh@17 -- # get_header_version major 00:09:15.586 11:16:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:15.586 11:16:42 version -- app/version.sh@14 -- # cut -f2 00:09:15.586 11:16:42 version -- app/version.sh@14 -- # tr -d '"' 00:09:15.586 11:16:42 version -- app/version.sh@17 -- # major=25 00:09:15.586 11:16:42 version -- app/version.sh@18 -- # get_header_version minor 00:09:15.586 11:16:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:15.586 11:16:42 version -- app/version.sh@14 -- # tr -d '"' 00:09:15.586 11:16:42 version -- app/version.sh@14 -- # cut -f2 00:09:15.586 11:16:42 version -- app/version.sh@18 -- # minor=1 00:09:15.586 11:16:42 version -- app/version.sh@19 -- # get_header_version patch 00:09:15.586 11:16:42 version -- app/version.sh@14 -- # cut -f2 00:09:15.586 11:16:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:15.586 11:16:42 version -- app/version.sh@14 -- # tr -d '"' 00:09:15.586 11:16:42 version -- app/version.sh@19 -- # patch=0 00:09:15.586 11:16:42 version -- app/version.sh@20 -- # get_header_version suffix 00:09:15.586 11:16:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:15.586 11:16:42 version -- app/version.sh@14 -- # cut -f2 00:09:15.586 11:16:42 version -- app/version.sh@14 -- # tr -d '"' 00:09:15.586 11:16:42 version -- app/version.sh@20 -- # suffix=-pre 00:09:15.586 11:16:42 version -- app/version.sh@22 -- # version=25.1 00:09:15.586 11:16:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:15.586 11:16:42 version -- app/version.sh@28 -- # version=25.1rc0 00:09:15.586 11:16:42 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:15.586 11:16:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:15.586 11:16:42 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:15.586 11:16:42 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:15.586 00:09:15.586 real 0m0.326s 00:09:15.586 user 0m0.183s 00:09:15.586 sys 0m0.205s 00:09:15.586 11:16:42 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.586 ************************************ 00:09:15.586 END TEST version 00:09:15.586 ************************************ 00:09:15.586 11:16:42 version -- common/autotest_common.sh@10 -- # set +x 00:09:15.586 11:16:42 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:15.586 11:16:42 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:15.586 11:16:42 -- spdk/autotest.sh@194 -- # uname -s 00:09:15.586 11:16:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:15.586 11:16:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:15.586 11:16:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:15.586 11:16:42 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:09:15.586 11:16:42 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:15.586 11:16:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:15.586 11:16:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.586 11:16:42 -- common/autotest_common.sh@10 -- # set +x 00:09:15.586 ************************************ 00:09:15.586 START TEST blockdev_nvme 00:09:15.586 ************************************ 00:09:15.586 11:16:42 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:15.846 * Looking for test storage... 00:09:15.846 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:15.846 11:16:42 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:15.846 11:16:42 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:09:15.846 11:16:42 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:15.846 11:16:42 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.846 11:16:42 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:09:15.847 11:16:42 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.847 11:16:42 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:09:15.847 11:16:42 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:09:15.847 11:16:42 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.847 11:16:42 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:09:15.847 11:16:42 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.847 11:16:42 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.847 11:16:42 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.847 11:16:42 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:09:15.847 11:16:42 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.847 11:16:42 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:15.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.847 --rc genhtml_branch_coverage=1 00:09:15.847 --rc genhtml_function_coverage=1 00:09:15.847 --rc genhtml_legend=1 00:09:15.847 --rc geninfo_all_blocks=1 00:09:15.847 --rc geninfo_unexecuted_blocks=1 00:09:15.847 00:09:15.847 ' 00:09:15.847 11:16:42 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:15.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.847 --rc genhtml_branch_coverage=1 00:09:15.847 --rc genhtml_function_coverage=1 00:09:15.847 --rc genhtml_legend=1 00:09:15.847 --rc geninfo_all_blocks=1 00:09:15.847 --rc geninfo_unexecuted_blocks=1 00:09:15.847 00:09:15.847 ' 00:09:15.847 11:16:42 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:15.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.847 --rc genhtml_branch_coverage=1 00:09:15.847 --rc genhtml_function_coverage=1 00:09:15.847 --rc genhtml_legend=1 00:09:15.847 --rc geninfo_all_blocks=1 00:09:15.847 --rc geninfo_unexecuted_blocks=1 00:09:15.847 00:09:15.847 ' 00:09:15.847 11:16:42 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:15.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.847 --rc genhtml_branch_coverage=1 00:09:15.847 --rc genhtml_function_coverage=1 00:09:15.847 --rc genhtml_legend=1 00:09:15.847 --rc geninfo_all_blocks=1 00:09:15.847 --rc geninfo_unexecuted_blocks=1 00:09:15.847 00:09:15.847 ' 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:15.847 11:16:42 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61019 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:15.847 11:16:42 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61019 00:09:15.847 11:16:42 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61019 ']' 00:09:15.847 11:16:42 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.847 11:16:42 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.847 11:16:42 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.847 11:16:42 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.847 11:16:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:16.106 [2024-12-10 11:16:42.987365] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
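For the nvme test type, blockdev.sh next has gen_nvme.sh emit a bdev config and loads it in a single load_subsystem_config call (traced below). An equivalent per-controller sketch over RPC, using the same bdev names and PCI addresses that appear in that config:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    "$rpc" bdev_nvme_attach_controller -b Nvme1 -t PCIe -a 0000:00:11.0
    "$rpc" bdev_nvme_attach_controller -b Nvme2 -t PCIe -a 0000:00:12.0
    "$rpc" bdev_nvme_attach_controller -b Nvme3 -t PCIe -a 0000:00:13.0
    "$rpc" bdev_get_bdevs | jq -r '.[].name'   # Nvme0n1, Nvme1n1, Nvme2n1, Nvme2n2, Nvme2n3, ...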
00:09:16.106 [2024-12-10 11:16:42.988272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61019 ] 00:09:16.106 [2024-12-10 11:16:43.169562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.365 [2024-12-10 11:16:43.288601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.303 11:16:44 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.303 11:16:44 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:09:17.303 11:16:44 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:09:17.303 11:16:44 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:09:17.303 11:16:44 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:09:17.303 11:16:44 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:17.303 11:16:44 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:17.303 11:16:44 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:17.303 11:16:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.303 11:16:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:17.562 11:16:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.562 11:16:44 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:09:17.562 11:16:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.562 11:16:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:17.562 11:16:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.562 11:16:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:09:17.562 11:16:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:09:17.562 11:16:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.563 11:16:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:17.563 11:16:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.563 11:16:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:09:17.563 11:16:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.563 11:16:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:17.563 11:16:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.563 11:16:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:17.563 11:16:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.563 11:16:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:17.822 11:16:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.822 11:16:44 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:09:17.822 11:16:44 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:09:17.822 11:16:44 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:09:17.822 11:16:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.822 11:16:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:17.822 11:16:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.822 11:16:44 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:09:17.822 11:16:44 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:09:17.823 11:16:44 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9dcfb0c6-5d67-4d01-be40-9e0598d0d842"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9dcfb0c6-5d67-4d01-be40-9e0598d0d842",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "9d8215be-3c71-4c7d-bf39-e381aa4efe2a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9d8215be-3c71-4c7d-bf39-e381aa4efe2a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "05bd1d54-d0a3-49fd-8416-3dbca43cc7a8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "05bd1d54-d0a3-49fd-8416-3dbca43cc7a8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "967a2e37-33ba-4591-a768-a24a06a345d6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "967a2e37-33ba-4591-a768-a24a06a345d6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "740545b9-c70a-4841-b4e5-867bba50e6ab"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "740545b9-c70a-4841-b4e5-867bba50e6ab",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "b76f4dbd-107d-4076-ae17-cbb3174a34ef"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b76f4dbd-107d-4076-ae17-cbb3174a34ef",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:17.823 11:16:44 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:09:17.823 11:16:44 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:09:17.823 11:16:44 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:09:17.823 11:16:44 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61019 00:09:17.823 11:16:44 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61019 ']' 00:09:17.823 11:16:44 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61019 00:09:17.823 11:16:44 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:09:17.823 11:16:44 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.823 11:16:44 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61019 00:09:17.823 11:16:44 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.823 killing process with pid 61019 00:09:17.823 11:16:44 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.823 11:16:44 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61019' 00:09:17.823 11:16:44 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61019 00:09:17.823 11:16:44 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61019 00:09:20.357 11:16:47 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:20.357 11:16:47 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:20.357 11:16:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:20.357 11:16:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.357 11:16:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.357 ************************************ 00:09:20.357 START TEST bdev_hello_world 00:09:20.357 ************************************ 00:09:20.357 11:16:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:20.357 [2024-12-10 11:16:47.403089] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:20.357 [2024-12-10 11:16:47.403544] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61120 ] 00:09:20.616 [2024-12-10 11:16:47.586321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.616 [2024-12-10 11:16:47.707215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.552 [2024-12-10 11:16:48.385201] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:21.552 [2024-12-10 11:16:48.385263] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:21.552 [2024-12-10 11:16:48.385294] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:21.552 [2024-12-10 11:16:48.388361] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:21.552 [2024-12-10 11:16:48.388849] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:21.552 [2024-12-10 11:16:48.388878] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:21.552 [2024-12-10 11:16:48.389166] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
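The bdev_hello_world pass above reduces to a single invocation of the example binary. The same command is runnable by hand against the config the harness generated; -b names the bdev the example opens for its write/read round trip:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1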
00:09:21.552 00:09:21.552 [2024-12-10 11:16:48.389199] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:22.513 00:09:22.513 real 0m2.237s 00:09:22.513 user 0m1.861s 00:09:22.513 sys 0m0.266s 00:09:22.513 11:16:49 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.513 11:16:49 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:22.513 ************************************ 00:09:22.513 END TEST bdev_hello_world 00:09:22.513 ************************************ 00:09:22.513 11:16:49 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:09:22.513 11:16:49 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.513 11:16:49 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.513 11:16:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.513 ************************************ 00:09:22.513 START TEST bdev_bounds 00:09:22.513 ************************************ 00:09:22.513 11:16:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:09:22.513 11:16:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61162 00:09:22.513 11:16:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:22.513 Process bdevio pid: 61162 00:09:22.513 11:16:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:22.513 11:16:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61162' 00:09:22.513 11:16:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61162 00:09:22.513 11:16:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61162 ']' 00:09:22.513 11:16:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.513 11:16:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.513 11:16:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.513 11:16:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.513 11:16:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:22.773 [2024-12-10 11:16:49.702506] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:09:22.773 [2024-12-10 11:16:49.702820] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61162 ] 00:09:23.031 [2024-12-10 11:16:49.886743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:23.031 [2024-12-10 11:16:50.013753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.032 [2024-12-10 11:16:50.013894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.032 [2024-12-10 11:16:50.013957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.969 11:16:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.969 11:16:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:09:23.969 11:16:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:23.969 I/O targets: 00:09:23.969 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:23.969 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:23.969 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:23.969 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:23.969 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:23.969 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:23.969 00:09:23.969 00:09:23.969 CUnit - A unit testing framework for C - Version 2.1-3 00:09:23.969 http://cunit.sourceforge.net/ 00:09:23.969 00:09:23.969 00:09:23.969 Suite: bdevio tests on: Nvme3n1 00:09:23.969 Test: blockdev write read block ...passed 00:09:23.969 Test: blockdev write zeroes read block ...passed 00:09:23.969 Test: blockdev write zeroes read no split ...passed 00:09:23.969 Test: blockdev write zeroes read split ...passed 00:09:23.969 Test: blockdev write zeroes read split partial ...passed 00:09:23.969 Test: blockdev reset ...[2024-12-10 11:16:50.883237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:09:23.969 [2024-12-10 11:16:50.887325] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller spassed 00:09:23.969 Test: blockdev write read 8 blocks ...uccessful. 
00:09:23.969 passed 00:09:23.969 Test: blockdev write read size > 128k ...passed 00:09:23.969 Test: blockdev write read invalid size ...passed 00:09:23.969 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:23.969 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:23.969 Test: blockdev write read max offset ...passed 00:09:23.969 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:23.969 Test: blockdev writev readv 8 blocks ...passed 00:09:23.969 Test: blockdev writev readv 30 x 1block ...passed 00:09:23.969 Test: blockdev writev readv block ...passed 00:09:23.969 Test: blockdev writev readv size > 128k ...passed 00:09:23.969 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:23.969 Test: blockdev comparev and writev ...[2024-12-10 11:16:50.896596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba20a000 len:0x1000 00:09:23.969 [2024-12-10 11:16:50.896649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:23.969 passed 00:09:23.969 Test: blockdev nvme passthru rw ...passed 00:09:23.969 Test: blockdev nvme passthru vendor specific ...passed 00:09:23.969 Test: blockdev nvme admin passthru ...[2024-12-10 11:16:50.897567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:23.969 [2024-12-10 11:16:50.897614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:23.969 passed 00:09:23.969 Test: blockdev copy ...passed 00:09:23.969 Suite: bdevio tests on: Nvme2n3 00:09:23.969 Test: blockdev write read block ...passed 00:09:23.969 Test: blockdev write zeroes read block ...passed 00:09:23.969 Test: blockdev write zeroes read no split ...passed 00:09:23.969 Test: blockdev write zeroes read split ...passed 00:09:23.969 Test: blockdev write zeroes read split partial ...passed 00:09:23.969 Test: blockdev reset ...[2024-12-10 11:16:50.973416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:23.969 [2024-12-10 11:16:50.977658] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:09:23.969 Test: blockdev write read 8 blocks ...uccessful. 
00:09:23.969 passed 00:09:23.969 Test: blockdev write read size > 128k ...passed 00:09:23.969 Test: blockdev write read invalid size ...passed 00:09:23.969 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:23.969 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:23.969 Test: blockdev write read max offset ...passed 00:09:23.969 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:23.969 Test: blockdev writev readv 8 blocks ...passed 00:09:23.969 Test: blockdev writev readv 30 x 1block ...passed 00:09:23.969 Test: blockdev writev readv block ...passed 00:09:23.969 Test: blockdev writev readv size > 128k ...passed 00:09:23.969 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:23.969 Test: blockdev comparev and writev ...[2024-12-10 11:16:50.986596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29cc06000 len:0x1000 00:09:23.969 [2024-12-10 11:16:50.986648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:23.969 passed 00:09:23.969 Test: blockdev nvme passthru rw ...passed 00:09:23.969 Test: blockdev nvme passthru vendor specific ...[2024-12-10 11:16:50.987528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:09:23.969 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0 00:09:23.969 [2024-12-10 11:16:50.987693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:23.969 passed 00:09:23.969 Test: blockdev copy ...passed 00:09:23.969 Suite: bdevio tests on: Nvme2n2 00:09:23.969 Test: blockdev write read block ...passed 00:09:23.969 Test: blockdev write zeroes read block ...passed 00:09:23.969 Test: blockdev write zeroes read no split ...passed 00:09:23.969 Test: blockdev write zeroes read split ...passed 00:09:23.969 Test: blockdev write zeroes read split partial ...passed 00:09:23.969 Test: blockdev reset ...[2024-12-10 11:16:51.065898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:23.969 [2024-12-10 11:16:51.070049] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:09:23.969 Test: blockdev write read 8 blocks ...uccessful. 
00:09:23.969 passed 00:09:23.969 Test: blockdev write read size > 128k ...passed 00:09:23.969 Test: blockdev write read invalid size ...passed 00:09:23.969 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:23.969 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:23.969 Test: blockdev write read max offset ...passed 00:09:23.969 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:23.969 Test: blockdev writev readv 8 blocks ...passed 00:09:23.969 Test: blockdev writev readv 30 x 1block ...passed 00:09:23.969 Test: blockdev writev readv block ...passed 00:09:23.969 Test: blockdev writev readv size > 128k ...passed 00:09:23.969 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:23.969 Test: blockdev comparev and writev ...[2024-12-10 11:16:51.078665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca23c000 len:0x1000 00:09:23.969 [2024-12-10 11:16:51.078881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:23.969 passed 00:09:23.969 Test: blockdev nvme passthru rw ...passed 00:09:23.969 Test: blockdev nvme passthru vendor specific ...[2024-12-10 11:16:51.079818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:23.969 [2024-12-10 11:16:51.079990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:09:23.969 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:09:24.228 passed 00:09:24.228 Test: blockdev copy ...passed 00:09:24.228 Suite: bdevio tests on: Nvme2n1 00:09:24.228 Test: blockdev write read block ...passed 00:09:24.228 Test: blockdev write zeroes read block ...passed 00:09:24.228 Test: blockdev write zeroes read no split ...passed 00:09:24.228 Test: blockdev write zeroes read split ...passed 00:09:24.228 Test: blockdev write zeroes read split partial ...passed 00:09:24.229 Test: blockdev reset ...[2024-12-10 11:16:51.155567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:24.229 [2024-12-10 11:16:51.159580] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:09:24.229 Test: blockdev write read 8 blocks ...uccessful. 
00:09:24.229 passed 00:09:24.229 Test: blockdev write read size > 128k ...passed 00:09:24.229 Test: blockdev write read invalid size ...passed 00:09:24.229 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.229 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.229 Test: blockdev write read max offset ...passed 00:09:24.229 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.229 Test: blockdev writev readv 8 blocks ...passed 00:09:24.229 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.229 Test: blockdev writev readv block ...passed 00:09:24.229 Test: blockdev writev readv size > 128k ...passed 00:09:24.229 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.229 Test: blockdev comparev and writev ...[2024-12-10 11:16:51.167721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca238000 len:0x1000 00:09:24.229 [2024-12-10 11:16:51.167796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:24.229 passed 00:09:24.229 Test: blockdev nvme passthru rw ...passed 00:09:24.229 Test: blockdev nvme passthru vendor specific ...[2024-12-10 11:16:51.168684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:24.229 [2024-12-10 11:16:51.168735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:24.229 passed 00:09:24.229 Test: blockdev nvme admin passthru ...passed 00:09:24.229 Test: blockdev copy ...passed 00:09:24.229 Suite: bdevio tests on: Nvme1n1 00:09:24.229 Test: blockdev write read block ...passed 00:09:24.229 Test: blockdev write zeroes read block ...passed 00:09:24.229 Test: blockdev write zeroes read no split ...passed 00:09:24.229 Test: blockdev write zeroes read split ...passed 00:09:24.229 Test: blockdev write zeroes read split partial ...passed 00:09:24.229 Test: blockdev reset ...[2024-12-10 11:16:51.248732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:24.229 [2024-12-10 11:16:51.252938] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller spassed 00:09:24.229 Test: blockdev write read 8 blocks ...uccessful. 
00:09:24.229 passed 00:09:24.229 Test: blockdev write read size > 128k ...passed 00:09:24.229 Test: blockdev write read invalid size ...passed 00:09:24.229 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.229 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.229 Test: blockdev write read max offset ...passed 00:09:24.229 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.229 Test: blockdev writev readv 8 blocks ...passed 00:09:24.229 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.229 Test: blockdev writev readv block ...passed 00:09:24.229 Test: blockdev writev readv size > 128k ...passed 00:09:24.229 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.229 Test: blockdev comparev and writev ...[2024-12-10 11:16:51.261680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca234000 len:0x1000 00:09:24.229 [2024-12-10 11:16:51.261739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:24.229 passed 00:09:24.229 Test: blockdev nvme passthru rw ...passed 00:09:24.229 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.229 Test: blockdev nvme admin passthru ...[2024-12-10 11:16:51.262547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:24.229 [2024-12-10 11:16:51.262588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:24.229 passed 00:09:24.229 Test: blockdev copy ...passed 00:09:24.229 Suite: bdevio tests on: Nvme0n1 00:09:24.229 Test: blockdev write read block ...passed 00:09:24.229 Test: blockdev write zeroes read block ...passed 00:09:24.229 Test: blockdev write zeroes read no split ...passed 00:09:24.229 Test: blockdev write zeroes read split ...passed 00:09:24.229 Test: blockdev write zeroes read split partial ...passed 00:09:24.229 Test: blockdev reset ...[2024-12-10 11:16:51.338997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:24.488 [2024-12-10 11:16:51.342840] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller spassed 00:09:24.488 Test: blockdev write read 8 blocks ...uccessful. 00:09:24.488 passed 00:09:24.488 Test: blockdev write read size > 128k ...passed 00:09:24.488 Test: blockdev write read invalid size ...passed 00:09:24.488 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.488 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.488 Test: blockdev write read max offset ...passed 00:09:24.488 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.488 Test: blockdev writev readv 8 blocks ...passed 00:09:24.488 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.488 Test: blockdev writev readv block ...passed 00:09:24.488 Test: blockdev writev readv size > 128k ...passed 00:09:24.488 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.488 Test: blockdev comparev and writev ...passed 00:09:24.488 Test: blockdev nvme passthru rw ...[2024-12-10 11:16:51.352002] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:24.488 separate metadata which is not supported yet. 
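All six bdevio suites in this run are driven from one server process over RPC. A sketch of the same flow by hand, assuming the shared bdev.json config; -w makes bdevio wait for an RPC trigger instead of running tests immediately, and the sleep stands in for the harness's waitforlisten polling:

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  bdevio_pid=$!
  sleep 1   # illustrative; the harness polls the RPC socket instead
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"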
00:09:24.488 passed 00:09:24.488 Test: blockdev nvme passthru vendor specific ...[2024-12-10 11:16:51.352704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:24.488 [2024-12-10 11:16:51.352938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0passed 00:09:24.488 Test: blockdev nvme admin passthru ... sqhd:0017 p:1 m:0 dnr:1 00:09:24.488 passed 00:09:24.488 Test: blockdev copy ...passed 00:09:24.488 00:09:24.488 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.488 suites 6 6 n/a 0 0 00:09:24.488 tests 138 138 138 0 0 00:09:24.488 asserts 893 893 893 0 n/a 00:09:24.488 00:09:24.488 Elapsed time = 1.476 seconds 00:09:24.488 0 00:09:24.488 11:16:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61162 00:09:24.488 11:16:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61162 ']' 00:09:24.488 11:16:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61162 00:09:24.488 11:16:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:09:24.488 11:16:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.488 11:16:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61162 00:09:24.488 killing process with pid 61162 00:09:24.488 11:16:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.489 11:16:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.489 11:16:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61162' 00:09:24.489 11:16:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61162 00:09:24.489 11:16:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61162 00:09:25.426 ************************************ 00:09:25.426 END TEST bdev_bounds 00:09:25.426 ************************************ 00:09:25.426 11:16:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:25.426 00:09:25.426 real 0m2.877s 00:09:25.426 user 0m7.312s 00:09:25.426 sys 0m0.438s 00:09:25.426 11:16:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.426 11:16:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:25.685 11:16:52 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:25.685 11:16:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:25.685 11:16:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.685 11:16:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:25.685 ************************************ 00:09:25.685 START TEST bdev_nbd 00:09:25.685 ************************************ 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:25.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61227 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61227 /var/tmp/spdk-nbd.sock 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61227 ']' 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:25.686 11:16:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:25.686 [2024-12-10 11:16:52.656178] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
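Every per-device check that follows is the same attach-probe-detach cycle. A sketch for one device, assuming bdev_svc is already listening on /var/tmp/spdk-nbd.sock as started above; /tmp/nbdtest stands in for the harness's scratch file:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Expose the bdev as a kernel block device over NBD.
  $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  grep -q -w nbd0 /proc/partitions      # confirm the kernel registered it
  # A single 4096-byte direct read proves the device answers I/O.
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  $rpc -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0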
00:09:25.686 [2024-12-10 11:16:52.656547] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.944 [2024-12-10 11:16:52.841874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.944 [2024-12-10 11:16:52.958473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:26.918 1+0 records in 
00:09:26.918 1+0 records out 00:09:26.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706753 s, 5.8 MB/s 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:26.918 11:16:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.198 1+0 records in 00:09:27.198 1+0 records out 00:09:27.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499376 s, 8.2 MB/s 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:27.198 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:27.456 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:27.456 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:27.456 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:09:27.456 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:09:27.456 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:27.456 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:27.457 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:27.457 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:09:27.457 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:27.457 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:27.457 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:27.457 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.457 1+0 records in 00:09:27.457 1+0 records out 00:09:27.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767795 s, 5.3 MB/s 00:09:27.457 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.457 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:27.457 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.457 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:27.457 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:27.457 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:27.457 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:27.457 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.716 1+0 records in 00:09:27.716 1+0 records out 00:09:27.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765692 s, 5.3 MB/s 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.716 11:16:54 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:27.716 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:27.974 11:16:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:27.974 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:27.974 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:27.974 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:27.974 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:09:27.974 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:27.974 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:27.974 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:27.974 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:09:27.974 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:27.974 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:27.974 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:27.975 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.975 1+0 records in 00:09:27.975 1+0 records out 00:09:27.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000774536 s, 5.3 MB/s 00:09:27.975 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:28.231 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:28.232 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.232 1+0 records in 00:09:28.232 1+0 records out 00:09:28.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0008211 s, 5.0 MB/s 00:09:28.232 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.490 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:28.490 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.490 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:28.490 11:16:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:28.490 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:28.490 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:28.490 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:28.490 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:28.490 { 00:09:28.490 "nbd_device": "/dev/nbd0", 00:09:28.490 "bdev_name": "Nvme0n1" 00:09:28.490 }, 00:09:28.490 { 00:09:28.490 "nbd_device": "/dev/nbd1", 00:09:28.490 "bdev_name": "Nvme1n1" 00:09:28.490 }, 00:09:28.490 { 00:09:28.490 "nbd_device": "/dev/nbd2", 00:09:28.490 "bdev_name": "Nvme2n1" 00:09:28.490 }, 00:09:28.490 { 00:09:28.490 "nbd_device": "/dev/nbd3", 00:09:28.490 "bdev_name": "Nvme2n2" 00:09:28.490 }, 00:09:28.490 { 00:09:28.490 "nbd_device": "/dev/nbd4", 00:09:28.490 "bdev_name": "Nvme2n3" 00:09:28.490 }, 00:09:28.490 { 00:09:28.490 "nbd_device": "/dev/nbd5", 00:09:28.490 "bdev_name": "Nvme3n1" 00:09:28.490 } 00:09:28.490 ]' 00:09:28.490 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:28.490 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:28.490 { 00:09:28.490 "nbd_device": "/dev/nbd0", 00:09:28.490 "bdev_name": "Nvme0n1" 00:09:28.490 }, 00:09:28.490 { 00:09:28.490 "nbd_device": "/dev/nbd1", 00:09:28.490 "bdev_name": "Nvme1n1" 00:09:28.490 }, 00:09:28.490 { 00:09:28.490 "nbd_device": "/dev/nbd2", 00:09:28.490 "bdev_name": "Nvme2n1" 00:09:28.490 }, 00:09:28.490 { 00:09:28.490 "nbd_device": "/dev/nbd3", 00:09:28.490 "bdev_name": "Nvme2n2" 00:09:28.490 }, 00:09:28.490 { 00:09:28.490 "nbd_device": "/dev/nbd4", 00:09:28.490 "bdev_name": "Nvme2n3" 00:09:28.490 }, 00:09:28.490 { 00:09:28.490 "nbd_device": "/dev/nbd5", 00:09:28.490 "bdev_name": "Nvme3n1" 00:09:28.490 } 00:09:28.490 ]' 00:09:28.490 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:28.749 11:16:55 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:28.749 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.749 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:28.749 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:28.749 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:28.749 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.749 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:28.749 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:28.750 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:28.750 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:28.750 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:28.750 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:28.750 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:28.750 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:28.750 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:28.750 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.750 11:16:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:29.007 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:29.008 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:29.008 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:29.008 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.008 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.008 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:29.008 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:29.008 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.008 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.008 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:29.265 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:29.266 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:29.266 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:29.266 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.266 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.266 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:29.266 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:29.266 11:16:56 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:29.266 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.266 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:29.524 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:29.524 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:29.524 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:29.524 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.524 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.524 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:29.524 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:29.524 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.524 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.524 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:29.782 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:29.782 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:29.782 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:29.782 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.782 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.782 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:29.782 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:29.782 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.782 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.782 11:16:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:30.040 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:30.040 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:30.040 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:30.040 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:30.040 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:30.040 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:30.040 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:30.040 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:30.040 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:30.040 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.040 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:30.299 11:16:57 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:30.299 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:30.557 /dev/nbd0 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:30.557 
11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:30.557 1+0 records in 00:09:30.557 1+0 records out 00:09:30.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569067 s, 7.2 MB/s 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:30.557 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:30.815 /dev/nbd1 00:09:30.815 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:30.815 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:30.815 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:30.815 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:30.815 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:30.815 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:30.815 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:30.815 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:30.815 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:30.815 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:30.815 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:30.816 1+0 records in 00:09:30.816 1+0 records out 00:09:30.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617128 s, 6.6 MB/s 00:09:30.816 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.816 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:30.816 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.816 11:16:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:30.816 11:16:57 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:09:30.816 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:30.816 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:30.816 11:16:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:31.074 /dev/nbd10 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.074 1+0 records in 00:09:31.074 1+0 records out 00:09:31.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000686885 s, 6.0 MB/s 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:31.074 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:31.390 /dev/nbd11 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:31.390 11:16:58 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.390 1+0 records in 00:09:31.390 1+0 records out 00:09:31.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631531 s, 6.5 MB/s 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:31.390 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:31.665 /dev/nbd12 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.665 1+0 records in 00:09:31.665 1+0 records out 00:09:31.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000769899 s, 5.3 MB/s 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:31.665 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:31.924 /dev/nbd13 
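At this point nbd_start_disks has exported all six bdevs (Nvme0n1 and Nvme1n1 on /dev/nbd0-1, the Nvme2/Nvme3 namespaces on /dev/nbd10-13), each gated through the waitfornbd helper traced above. Condensed into plain bash, that helper amounts to the sketch below; the real version in common/autotest_common.sh runs two separate retry loops and uses the repo's nbdtest scratch file, and the sleep between retries is an assumption, not visible in the trace:

    waitfornbd() {
        local nbd_name=$1 i size
        # poll until the kernel lists the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # retry interval assumed, not shown in the trace
        done
        # prove the device is readable: one 4 KiB read with O_DIRECT
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]    # the direct read must have produced data
    }

The iflag=direct read is the important part: it bypasses the page cache, so success means the NBD kernel device really round-tripped a request to the SPDK target rather than answering from cached data.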
00:09:31.924 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:31.924 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:31.924 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:09:31.924 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:31.924 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:31.924 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:31.925 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:09:31.925 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:31.925 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:31.925 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:31.925 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.925 1+0 records in 00:09:31.925 1+0 records out 00:09:31.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000665471 s, 6.2 MB/s 00:09:31.925 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.925 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:31.925 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.925 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:31.925 11:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:31.925 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.925 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:31.925 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:31.925 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.925 11:16:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:32.184 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:32.184 { 00:09:32.184 "nbd_device": "/dev/nbd0", 00:09:32.184 "bdev_name": "Nvme0n1" 00:09:32.184 }, 00:09:32.184 { 00:09:32.184 "nbd_device": "/dev/nbd1", 00:09:32.184 "bdev_name": "Nvme1n1" 00:09:32.184 }, 00:09:32.184 { 00:09:32.184 "nbd_device": "/dev/nbd10", 00:09:32.184 "bdev_name": "Nvme2n1" 00:09:32.184 }, 00:09:32.184 { 00:09:32.184 "nbd_device": "/dev/nbd11", 00:09:32.184 "bdev_name": "Nvme2n2" 00:09:32.184 }, 00:09:32.184 { 00:09:32.184 "nbd_device": "/dev/nbd12", 00:09:32.184 "bdev_name": "Nvme2n3" 00:09:32.184 }, 00:09:32.184 { 00:09:32.184 "nbd_device": "/dev/nbd13", 00:09:32.184 "bdev_name": "Nvme3n1" 00:09:32.184 } 00:09:32.184 ]' 00:09:32.184 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:32.184 { 00:09:32.184 "nbd_device": "/dev/nbd0", 00:09:32.184 "bdev_name": "Nvme0n1" 00:09:32.184 }, 00:09:32.184 { 00:09:32.184 "nbd_device": "/dev/nbd1", 00:09:32.184 "bdev_name": "Nvme1n1" 00:09:32.184 }, 00:09:32.184 { 00:09:32.184 "nbd_device": "/dev/nbd10", 00:09:32.184 "bdev_name": "Nvme2n1" 
00:09:32.184 }, 00:09:32.184 { 00:09:32.184 "nbd_device": "/dev/nbd11", 00:09:32.184 "bdev_name": "Nvme2n2" 00:09:32.184 }, 00:09:32.184 { 00:09:32.184 "nbd_device": "/dev/nbd12", 00:09:32.184 "bdev_name": "Nvme2n3" 00:09:32.184 }, 00:09:32.184 { 00:09:32.184 "nbd_device": "/dev/nbd13", 00:09:32.184 "bdev_name": "Nvme3n1" 00:09:32.184 } 00:09:32.184 ]' 00:09:32.184 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:32.184 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:32.184 /dev/nbd1 00:09:32.184 /dev/nbd10 00:09:32.184 /dev/nbd11 00:09:32.184 /dev/nbd12 00:09:32.184 /dev/nbd13' 00:09:32.184 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:32.184 /dev/nbd1 00:09:32.184 /dev/nbd10 00:09:32.184 /dev/nbd11 00:09:32.184 /dev/nbd12 00:09:32.184 /dev/nbd13' 00:09:32.184 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:32.184 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:09:32.184 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:09:32.184 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:09:32.184 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:32.185 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:32.185 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:32.185 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:32.185 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:32.185 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:32.185 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:32.185 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:32.185 256+0 records in 00:09:32.185 256+0 records out 00:09:32.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120057 s, 87.3 MB/s 00:09:32.185 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.185 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:32.444 256+0 records in 00:09:32.444 256+0 records out 00:09:32.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120392 s, 8.7 MB/s 00:09:32.444 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.444 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:32.444 256+0 records in 00:09:32.444 256+0 records out 00:09:32.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127042 s, 8.3 MB/s 00:09:32.444 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.444 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:32.702 256+0 records in 00:09:32.702 256+0 records out 00:09:32.702 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124763 s, 8.4 MB/s 00:09:32.702 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.702 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:32.702 256+0 records in 00:09:32.702 256+0 records out 00:09:32.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122671 s, 8.5 MB/s 00:09:32.702 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.702 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:32.962 256+0 records in 00:09:32.962 256+0 records out 00:09:32.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121122 s, 8.7 MB/s 00:09:32.962 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.962 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:32.962 256+0 records in 00:09:32.962 256+0 records out 00:09:32.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123685 s, 8.5 MB/s 00:09:32.962 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:32.962 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:32.962 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:32.962 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:32.962 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:32.962 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:32.962 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:32.962 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.962 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:32.962 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.962 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:32.962 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.962 11:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:32.962 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.962 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:32.962 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.962 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:32.962 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.962 11:17:00 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:32.962 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:32.962 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:32.962 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.962 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:32.962 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:32.962 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:32.962 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.962 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:33.221 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:33.221 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:33.221 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:33.221 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.221 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.222 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:33.222 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.222 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.222 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.222 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:33.481 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:33.481 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:33.481 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:33.481 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.481 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.481 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:33.481 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.481 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.481 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.481 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:33.739 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:33.739 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:33.739 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:33.739 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.739 11:17:00 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.739 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:33.739 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.739 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.739 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.739 11:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:33.997 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:33.997 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:33.997 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:33.997 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.997 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.997 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:33.997 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.997 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.997 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.997 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:34.256 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:34.256 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:34.256 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:34.256 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.256 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.256 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:34.256 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:34.256 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.256 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.257 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:34.515 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:34.515 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:34.515 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:34.515 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.515 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.515 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:34.515 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:34.515 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.515 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:34.516 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.516 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:34.774 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:34.774 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:34.774 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:34.774 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:34.774 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:34.774 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:34.774 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:34.774 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:34.774 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:34.774 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:34.774 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:34.774 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:34.774 11:17:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:34.774 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.774 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:34.775 11:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:35.034 malloc_lvol_verify 00:09:35.034 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:35.293 737ad99c-3e15-483e-a7de-f95c6ee963d0 00:09:35.293 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:35.552 94d19883-1a51-42d9-8b99-d315da1167d1 00:09:35.552 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:35.815 /dev/nbd0 00:09:35.815 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:35.815 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:35.815 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:35.815 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:35.815 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:35.815 mke2fs 1.47.0 (5-Feb-2023) 00:09:35.815 Discarding device blocks: 0/4096 done 00:09:35.815 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:35.815 00:09:35.815 Allocating group tables: 0/1 done 00:09:35.815 Writing inode tables: 0/1 done 00:09:35.815 Creating journal (1024 blocks): done 00:09:35.815 Writing superblocks and filesystem accounting information: 0/1 done 00:09:35.815 00:09:35.815 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:35.815 11:17:02 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:35.815 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:35.815 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:35.815 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:35.815 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.815 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:35.815 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61227 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61227 ']' 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61227 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61227 00:09:36.083 killing process with pid 61227 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61227' 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61227 00:09:36.083 11:17:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61227 00:09:37.459 11:17:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:37.459 00:09:37.459 real 0m11.659s 00:09:37.459 user 0m15.223s 00:09:37.459 sys 0m4.751s 00:09:37.459 11:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.459 ************************************ 00:09:37.459 END TEST bdev_nbd 00:09:37.459 ************************************ 00:09:37.459 11:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:37.459 11:17:04 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:09:37.459 11:17:04 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:09:37.459 skipping fio tests on NVMe due to multi-ns failures. 00:09:37.459 11:17:04 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
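The nbd_with_lvol_verify step that closes out the NBD test corresponds to the following RPC sequence against the dedicated /var/tmp/spdk-nbd.sock socket, reconstructed from the trace above (the rpc.py path is shortened here):

    rpc='scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvol store on top of it
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume in that store
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0       # end-to-end sanity check: a real filesystem on the lvol
    $rpc nbd_stop_disk /dev/nbd0

The 4 MiB lvol size is confirmed by the capacity check earlier in the trace: /sys/block/nbd0/size reported 8192 sectors, i.e. 8192 x 512 B = 4 MiB.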
00:09:37.459 11:17:04 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:37.459 11:17:04 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:37.459 11:17:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:37.459 11:17:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.459 11:17:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:37.459 ************************************ 00:09:37.459 START TEST bdev_verify 00:09:37.459 ************************************ 00:09:37.459 11:17:04 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:37.459 [2024-12-10 11:17:04.385482] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:37.459 [2024-12-10 11:17:04.385654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61617 ] 00:09:37.459 [2024-12-10 11:17:04.566761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:37.719 [2024-12-10 11:17:04.689194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.719 [2024-12-10 11:17:04.689217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.654 Running I/O for 5 seconds... 00:09:40.547 19200.00 IOPS, 75.00 MiB/s [2024-12-10T11:17:08.599Z] 19584.00 IOPS, 76.50 MiB/s [2024-12-10T11:17:09.977Z] 19434.67 IOPS, 75.92 MiB/s [2024-12-10T11:17:10.544Z] 19920.00 IOPS, 77.81 MiB/s [2024-12-10T11:17:10.803Z] 20262.40 IOPS, 79.15 MiB/s 00:09:43.690 Latency(us) 00:09:43.690 [2024-12-10T11:17:10.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.690 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:43.690 Verification LBA range: start 0x0 length 0xbd0bd 00:09:43.690 Nvme0n1 : 5.05 1673.07 6.54 0.00 0.00 76268.11 15686.53 85486.32 00:09:43.690 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:43.690 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:43.690 Nvme0n1 : 5.04 1652.39 6.45 0.00 0.00 77211.90 16107.64 86328.55 00:09:43.690 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:43.690 Verification LBA range: start 0x0 length 0xa0000 00:09:43.690 Nvme1n1 : 5.05 1672.63 6.53 0.00 0.00 76148.23 14844.30 78327.36 00:09:43.690 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:43.690 Verification LBA range: start 0xa0000 length 0xa0000 00:09:43.690 Nvme1n1 : 5.06 1656.92 6.47 0.00 0.00 76814.23 6527.28 79169.59 00:09:43.690 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:43.690 Verification LBA range: start 0x0 length 0x80000 00:09:43.690 Nvme2n1 : 5.05 1672.19 6.53 0.00 0.00 75900.38 14423.18 64009.46 00:09:43.690 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:43.690 Verification LBA range: start 0x80000 length 0x80000 00:09:43.690 Nvme2n1 : 5.07 1664.73 6.50 0.00 0.00 76339.24 12107.05 64851.69 00:09:43.690 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:43.690 Verification LBA range: start 0x0 length 0x80000 00:09:43.690 Nvme2n2 : 5.11 1678.93 6.56 0.00 0.00 75566.43 13686.23 66536.15 00:09:43.690 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:43.690 Verification LBA range: start 0x80000 length 0x80000 00:09:43.690 Nvme2n2 : 5.08 1664.27 6.50 0.00 0.00 76204.84 12370.25 61482.77 00:09:43.690 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:43.690 Verification LBA range: start 0x0 length 0x80000 00:09:43.690 Nvme2n3 : 5.11 1677.77 6.55 0.00 0.00 75453.55 14212.63 69483.95 00:09:43.690 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:43.690 Verification LBA range: start 0x80000 length 0x80000 00:09:43.690 Nvme2n3 : 5.08 1663.83 6.50 0.00 0.00 76070.57 11843.86 63588.34 00:09:43.690 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:43.690 Verification LBA range: start 0x0 length 0x20000 00:09:43.690 Nvme3n1 : 5.12 1676.60 6.55 0.00 0.00 75344.51 12054.41 70326.18 00:09:43.690 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:43.690 Verification LBA range: start 0x20000 length 0x20000 00:09:43.690 Nvme3n1 : 5.08 1663.47 6.50 0.00 0.00 75938.08 11212.18 65272.80 00:09:43.690 [2024-12-10T11:17:10.804Z] =================================================================================================================== 00:09:43.690 [2024-12-10T11:17:10.804Z] Total : 20016.78 78.19 0.00 0.00 76100.71 6527.28 86328.55 00:09:45.069 00:09:45.069 real 0m7.702s 00:09:45.069 user 0m14.249s 00:09:45.069 sys 0m0.305s 00:09:45.069 11:17:11 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.069 ************************************ 00:09:45.069 END TEST bdev_verify 00:09:45.069 ************************************ 00:09:45.069 11:17:11 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:45.069 11:17:12 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:45.069 11:17:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:45.069 11:17:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.069 11:17:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:45.069 ************************************ 00:09:45.069 START TEST bdev_verify_big_io 00:09:45.069 ************************************ 00:09:45.069 11:17:12 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:45.069 [2024-12-10 11:17:12.160215] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
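Both verification passes drive the same bdevperf binary against the generated bdev.json; only the I/O size changes between them. Reconstructed from the command lines in the trace, with repo paths shortened and the harness's trailing empty '' argument omitted:

    # bdev_verify: 4 KiB I/Os
    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # bdev_verify_big_io: identical except for 64 KiB I/Os
    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3

-q is the queue depth, -o the I/O size in bytes, -w the workload (verify writes data and reads it back for comparison), -t the run time in seconds, and -m the core mask (0x3 = cores 0 and 1, matching the two reactors started above). -C allows every core to submit I/O to every bdev, which is why each bdev appears twice in the result tables, once per core mask (0x1 and 0x2).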
00:09:45.069 [2024-12-10 11:17:12.160476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61721 ] 00:09:45.328 [2024-12-10 11:17:12.340602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:45.587 [2024-12-10 11:17:12.464813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.587 [2024-12-10 11:17:12.464842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.525 Running I/O for 5 seconds... 00:09:51.592 1960.00 IOPS, 122.50 MiB/s [2024-12-10T11:17:19.275Z] 3063.50 IOPS, 191.47 MiB/s [2024-12-10T11:17:19.275Z] 3423.00 IOPS, 213.94 MiB/s 00:09:52.161 Latency(us) 00:09:52.161 [2024-12-10T11:17:19.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.161 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:52.161 Verification LBA range: start 0x0 length 0xbd0b 00:09:52.161 Nvme0n1 : 5.64 147.39 9.21 0.00 0.00 836397.00 34110.30 822016.21 00:09:52.161 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:52.161 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:52.161 Nvme0n1 : 5.73 156.30 9.77 0.00 0.00 804364.20 19897.68 842229.72 00:09:52.161 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:52.161 Verification LBA range: start 0x0 length 0xa000 00:09:52.161 Nvme1n1 : 5.72 152.97 9.56 0.00 0.00 797680.46 33689.19 737793.23 00:09:52.161 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:52.161 Verification LBA range: start 0xa000 length 0xa000 00:09:52.161 Nvme1n1 : 5.73 152.43 9.53 0.00 0.00 798602.73 37058.11 717579.72 00:09:52.161 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:52.161 Verification LBA range: start 0x0 length 0x8000 00:09:52.161 Nvme2n1 : 5.72 152.89 9.56 0.00 0.00 778091.88 32846.96 754637.83 00:09:52.161 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:52.161 Verification LBA range: start 0x8000 length 0x8000 00:09:52.161 Nvme2n1 : 5.73 152.19 9.51 0.00 0.00 778737.00 36636.99 737793.23 00:09:52.161 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:52.161 Verification LBA range: start 0x0 length 0x8000 00:09:52.161 Nvme2n2 : 5.72 152.17 9.51 0.00 0.00 760911.64 32636.40 768113.50 00:09:52.161 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:52.161 Verification LBA range: start 0x8000 length 0x8000 00:09:52.161 Nvme2n2 : 5.73 152.51 9.53 0.00 0.00 756896.14 36215.88 754637.83 00:09:52.161 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:52.161 Verification LBA range: start 0x0 length 0x8000 00:09:52.161 Nvme2n3 : 5.73 157.19 9.82 0.00 0.00 722031.46 37479.22 784958.10 00:09:52.161 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:52.161 Verification LBA range: start 0x8000 length 0x8000 00:09:52.161 Nvme2n3 : 5.73 156.29 9.77 0.00 0.00 722862.22 40216.47 771482.42 00:09:52.161 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:52.161 Verification LBA range: start 0x0 length 0x2000 00:09:52.161 Nvme3n1 : 5.74 166.47 10.40 0.00 0.00 666214.73 2237.17 848967.56 00:09:52.161 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 65536) 00:09:52.161 Verification LBA range: start 0x2000 length 0x2000 00:09:52.161 Nvme3n1 : 5.77 173.65 10.85 0.00 0.00 635427.72 2013.46 788327.02 00:09:52.161 [2024-12-10T11:17:19.275Z] =================================================================================================================== 00:09:52.161 [2024-12-10T11:17:19.275Z] Total : 1872.43 117.03 0.00 0.00 752381.09 2013.46 848967.56 00:09:54.067 00:09:54.067 real 0m8.858s 00:09:54.067 user 0m16.527s 00:09:54.067 sys 0m0.338s 00:09:54.067 11:17:20 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.067 ************************************ 00:09:54.067 END TEST bdev_verify_big_io 00:09:54.067 ************************************ 00:09:54.067 11:17:20 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:54.067 11:17:20 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:54.067 11:17:20 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:54.067 11:17:20 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.067 11:17:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:54.067 ************************************ 00:09:54.067 START TEST bdev_write_zeroes 00:09:54.067 ************************************ 00:09:54.067 11:17:21 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:54.067 [2024-12-10 11:17:21.094822] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:54.067 [2024-12-10 11:17:21.094962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61835 ] 00:09:54.326 [2024-12-10 11:17:21.274670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.326 [2024-12-10 11:17:21.391681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.261 Running I/O for 1 seconds... 
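Every one of these cases is launched through run_test, the autotest_common.sh wrapper that produces the starred START TEST / END TEST banners and the real/user/sys timings scattered through this log. A simplified sketch of its behavior (the real helper also manages xtrace state and propagates the test's exit code):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # the actual test command, e.g. bdevperf -w write_zeroes
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }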
00:09:56.196 71808.00 IOPS, 280.50 MiB/s 00:09:56.196 Latency(us) 00:09:56.196 [2024-12-10T11:17:23.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.196 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:56.196 Nvme0n1 : 1.02 11939.23 46.64 0.00 0.00 10690.68 8738.13 32004.73 00:09:56.196 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:56.196 Nvme1n1 : 1.02 11927.05 46.59 0.00 0.00 10688.86 8948.69 32846.96 00:09:56.196 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:56.196 Nvme2n1 : 1.02 11964.28 46.74 0.00 0.00 10618.57 5948.25 28846.37 00:09:56.196 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:56.196 Nvme2n2 : 1.02 11952.74 46.69 0.00 0.00 10606.30 6211.44 28846.37 00:09:56.196 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:56.196 Nvme2n3 : 1.02 11941.13 46.65 0.00 0.00 10566.12 6369.36 25582.73 00:09:56.196 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:56.196 Nvme3n1 : 1.02 11930.09 46.60 0.00 0.00 10531.18 6500.96 22740.20 00:09:56.196 [2024-12-10T11:17:23.310Z] =================================================================================================================== 00:09:56.196 [2024-12-10T11:17:23.310Z] Total : 71654.53 279.90 0.00 0.00 10616.83 5948.25 32846.96 00:09:57.579 00:09:57.579 real 0m3.293s 00:09:57.579 user 0m2.915s 00:09:57.579 sys 0m0.263s 00:09:57.579 ************************************ 00:09:57.579 END TEST bdev_write_zeroes 00:09:57.579 ************************************ 00:09:57.579 11:17:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.579 11:17:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:57.579 11:17:24 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:57.579 11:17:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:57.579 11:17:24 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.579 11:17:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:57.579 ************************************ 00:09:57.579 START TEST bdev_json_nonenclosed 00:09:57.579 ************************************ 00:09:57.579 11:17:24 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:57.579 [2024-12-10 11:17:24.466283] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
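bdev_json_nonenclosed is a negative test: bdevperf is pointed at nonenclosed.json, a config file whose content is not wrapped in a top-level JSON object, and the case passes only if startup is rejected with the json_config error that follows. The actual file is not reproduced in this log, but a config of roughly this illustrative shape would trip the check:

    "subsystems": [
        {
            "subsystem": "bdev",
            "config": []
        }
    ]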
00:09:57.579 [2024-12-10 11:17:24.466408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61894 ] 00:09:57.579 [2024-12-10 11:17:24.648318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.837 [2024-12-10 11:17:24.756680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.837 [2024-12-10 11:17:24.756781] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:57.837 [2024-12-10 11:17:24.756803] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:57.837 [2024-12-10 11:17:24.756816] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:58.095 00:09:58.096 real 0m0.644s 00:09:58.096 user 0m0.393s 00:09:58.096 sys 0m0.148s 00:09:58.096 11:17:25 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.096 11:17:25 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:58.096 ************************************ 00:09:58.096 END TEST bdev_json_nonenclosed 00:09:58.096 ************************************ 00:09:58.096 11:17:25 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:58.096 11:17:25 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:58.096 11:17:25 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.096 11:17:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:58.096 ************************************ 00:09:58.096 START TEST bdev_json_nonarray 00:09:58.096 ************************************ 00:09:58.096 11:17:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:58.096 [2024-12-10 11:17:25.198998] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:58.096 [2024-12-10 11:17:25.199123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61914 ] 00:09:58.354 [2024-12-10 11:17:25.382647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.613 [2024-12-10 11:17:25.493957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.613 [2024-12-10 11:17:25.494063] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
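[Annotation] Both JSON-config failures above are the expected outcome: nonenclosed.json and nonarray.json are deliberately malformed inputs, and each test passes precisely because bdevperf rejects the file with the error shown. Hypothetical minimal shapes of the two files, reconstructed from the error text rather than copied from the repo:

    # nonenclosed.json -- valid JSON fragments, but not wrapped in an enclosing {}:
    printf '"subsystems": []\n' > /tmp/nonenclosed.json
    # nonarray.json -- enclosed, but "subsystems" is an object, not an array:
    printf '{ "subsystems": {} }\n' > /tmp/nonarray.json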
00:09:58.613 [2024-12-10 11:17:25.494088] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:58.613 [2024-12-10 11:17:25.494100] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:58.872 00:09:58.872 real 0m0.657s 00:09:58.872 user 0m0.401s 00:09:58.872 sys 0m0.151s 00:09:58.872 11:17:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.872 11:17:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:58.872 ************************************ 00:09:58.872 END TEST bdev_json_nonarray 00:09:58.872 ************************************ 00:09:58.872 11:17:25 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:09:58.872 11:17:25 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:09:58.872 11:17:25 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:09:58.872 11:17:25 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:09:58.872 11:17:25 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:09:58.872 11:17:25 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:58.872 11:17:25 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:58.872 11:17:25 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:09:58.872 11:17:25 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:09:58.872 11:17:25 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:09:58.872 11:17:25 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:09:58.872 00:09:58.872 real 0m43.219s 00:09:58.872 user 1m3.675s 00:09:58.872 sys 0m7.865s 00:09:58.872 11:17:25 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.872 ************************************ 00:09:58.872 END TEST blockdev_nvme 00:09:58.872 ************************************ 00:09:58.872 11:17:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:58.872 11:17:25 -- spdk/autotest.sh@209 -- # uname -s 00:09:58.872 11:17:25 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:09:58.872 11:17:25 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:58.872 11:17:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:58.872 11:17:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.872 11:17:25 -- common/autotest_common.sh@10 -- # set +x 00:09:58.872 ************************************ 00:09:58.872 START TEST blockdev_nvme_gpt 00:09:58.872 ************************************ 00:09:58.872 11:17:25 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:59.132 * Looking for test storage... 
00:09:59.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:59.132 11:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:59.132 11:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:09:59.132 11:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:59.132 11:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.132 11:17:26 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:09:59.132 11:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.132 11:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:59.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.132 --rc genhtml_branch_coverage=1 00:09:59.132 --rc genhtml_function_coverage=1 00:09:59.132 --rc genhtml_legend=1 00:09:59.132 --rc geninfo_all_blocks=1 00:09:59.132 --rc geninfo_unexecuted_blocks=1 00:09:59.132 00:09:59.132 ' 00:09:59.132 11:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:59.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.132 --rc 
genhtml_branch_coverage=1 00:09:59.132 --rc genhtml_function_coverage=1 00:09:59.132 --rc genhtml_legend=1 00:09:59.132 --rc geninfo_all_blocks=1 00:09:59.132 --rc geninfo_unexecuted_blocks=1 00:09:59.132 00:09:59.132 ' 00:09:59.132 11:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:59.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.132 --rc genhtml_branch_coverage=1 00:09:59.132 --rc genhtml_function_coverage=1 00:09:59.132 --rc genhtml_legend=1 00:09:59.132 --rc geninfo_all_blocks=1 00:09:59.132 --rc geninfo_unexecuted_blocks=1 00:09:59.132 00:09:59.132 ' 00:09:59.132 11:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:59.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.132 --rc genhtml_branch_coverage=1 00:09:59.132 --rc genhtml_function_coverage=1 00:09:59.132 --rc genhtml_legend=1 00:09:59.132 --rc geninfo_all_blocks=1 00:09:59.132 --rc geninfo_unexecuted_blocks=1 00:09:59.132 00:09:59.132 ' 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:09:59.132 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:09:59.133 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:59.133 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61999 00:09:59.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
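[Annotation] waitforlisten then blocks until the freshly started spdk_tgt (pid 61999 above) answers on its RPC socket. A rough standalone approximation of what the helper does, not its actual body:

    spdk_tgt_pid=61999   # pid reported above
    # Poll the RPC socket until the target responds, bailing out if it dies:
    while ! scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo 'spdk_tgt exited early' >&2; exit 1; }
        sleep 0.5
    done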
00:09:59.133 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:59.133 11:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61999 00:09:59.133 11:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 61999 ']' 00:09:59.133 11:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.133 11:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.133 11:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.133 11:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.133 11:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:59.392 [2024-12-10 11:17:26.300411] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:59.392 [2024-12-10 11:17:26.300733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61999 ] 00:09:59.392 [2024-12-10 11:17:26.484311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.651 [2024-12-10 11:17:26.600366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.587 11:17:27 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.587 11:17:27 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:10:00.587 11:17:27 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:10:00.587 11:17:27 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:10:00.587 11:17:27 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:01.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:01.413 Waiting for block devices as requested 00:10:01.413 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:01.413 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:01.671 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:01.671 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:07.019 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:10:07.019 11:17:33 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:07.019 11:17:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:07.019 11:17:33 
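[Annotation] The loop above classifies every namespace by the kernel's zoned attribute; all of them report none in this run, so nothing is excluded from the device list. Stripped of the xtrace noise, the check is roughly (the harness actually walks /sys/class/nvme, which also covers multipath names like nvme3c3n1):

    # A block device is zoned iff queue/zoned exists and is not "none":
    for dev in /sys/block/nvme*n*; do
        zoned=none
        [[ -e $dev/queue/zoned ]] && zoned=$(<"$dev/queue/zoned")
        echo "${dev##*/}: $zoned"
    done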
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:10:07.019 BYT; 00:10:07.019 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:10:07.019 BYT; 00:10:07.019 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:07.019 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:07.019 11:17:33 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:07.020 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:07.020 11:17:33 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:10:07.957 The operation has completed successfully. 00:10:07.957 11:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:10:08.892 The operation has completed successfully. 00:10:09.150 11:17:36 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:09.718 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:10.323 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:10.323 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:10.583 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:10.583 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:10.583 11:17:37 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:10:10.583 11:17:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.583 11:17:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:10.583 [] 00:10:10.583 11:17:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.583 11:17:37 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:10:10.583 11:17:37 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:10:10.583 11:17:37 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:10.583 11:17:37 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:10.842 11:17:37 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:10.842 11:17:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.842 11:17:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:11.100 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.100 11:17:38 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:10:11.100 11:17:38 
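[Annotation] Condensed, the partitioning step above writes a fresh GPT on /dev/nvme0n1 and stamps the two halves with the SPDK partition-type GUIDs scraped from module/bdev/gpt/gpt.h, the current GUID on partition 1 and the legacy one on partition 2 (commands as recorded in this run):

    # Two 50% partitions on a blank label:
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%
    # Type GUIDs (-t) from gpt.h plus fixed unique GUIDs (-u):
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1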
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.100 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:11.100 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.100 11:17:38 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:10:11.100 11:17:38 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:10:11.100 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.100 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:11.100 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.100 11:17:38 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:10:11.100 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.100 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:11.100 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.100 11:17:38 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:11.100 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.100 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:11.100 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.100 11:17:38 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:10:11.100 11:17:38 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:10:11.100 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.100 11:17:38 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:10:11.100 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:11.100 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.359 11:17:38 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:10:11.359 11:17:38 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:10:11.360 11:17:38 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "5ea34a94-f81d-4a28-9665-398e4e0d2603"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "5ea34a94-f81d-4a28-9665-398e4e0d2603",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "38a64647-78bd-4029-b895-120dbed08365"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "38a64647-78bd-4029-b895-120dbed08365",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "52ac29b1-2dec-4810-aefc-c008084455d5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "52ac29b1-2dec-4810-aefc-c008084455d5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ddc3a21f-72be-4842-9944-67a9d22e8225"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ddc3a21f-72be-4842-9944-67a9d22e8225",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "2605b383-b930-4a2e-b23f-857682e3f077"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2605b383-b930-4a2e-b23f-857682e3f077",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:11.360 11:17:38 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:10:11.360 11:17:38 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:10:11.360 11:17:38 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:10:11.360 11:17:38 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 61999 00:10:11.360 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 61999 ']' 00:10:11.360 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 61999 00:10:11.360 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:10:11.360 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.360 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61999 00:10:11.360 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.360 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.360 killing process with pid 61999 00:10:11.360 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61999' 00:10:11.360 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 61999 00:10:11.360 11:17:38 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 61999 00:10:13.899 11:17:40 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:13.899 11:17:40 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:13.899 11:17:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:13.899 11:17:40 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.899 11:17:40 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:13.899 ************************************ 00:10:13.899 START TEST bdev_hello_world 00:10:13.899 ************************************ 00:10:13.899 11:17:40 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:13.899 [2024-12-10 11:17:40.815218] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:10:13.899 [2024-12-10 11:17:40.815334] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62647 ] 00:10:13.899 [2024-12-10 11:17:40.995113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.159 [2024-12-10 11:17:41.112364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.767 [2024-12-10 11:17:41.774280] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:14.767 [2024-12-10 11:17:41.774333] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:14.767 [2024-12-10 11:17:41.774360] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:14.767 [2024-12-10 11:17:41.777378] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:14.767 [2024-12-10 11:17:41.778017] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:14.767 [2024-12-10 11:17:41.778052] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:14.767 [2024-12-10 11:17:41.778277] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
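[Annotation] The hello-world pass is a single round trip through Nvme0n1: open the bdev and an I/O channel, write "Hello World!", read it back, print it. Standalone:

    ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1
    # expected tail of the output:
    #   read_complete: *NOTICE*: Read string from bdev : Hello World!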
00:10:14.767 00:10:14.767 [2024-12-10 11:17:41.778306] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:16.147 00:10:16.147 real 0m2.170s 00:10:16.147 user 0m1.831s 00:10:16.147 sys 0m0.231s 00:10:16.147 11:17:42 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.147 11:17:42 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:16.147 ************************************ 00:10:16.147 END TEST bdev_hello_world 00:10:16.147 ************************************ 00:10:16.147 11:17:42 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:10:16.147 11:17:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:16.147 11:17:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.147 11:17:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:16.147 ************************************ 00:10:16.147 START TEST bdev_bounds 00:10:16.147 ************************************ 00:10:16.147 11:17:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:10:16.147 11:17:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62695 00:10:16.147 11:17:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:16.147 11:17:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:16.147 Process bdevio pid: 62695 00:10:16.147 11:17:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62695' 00:10:16.147 11:17:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62695 00:10:16.147 11:17:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62695 ']' 00:10:16.147 11:17:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.147 11:17:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.147 11:17:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.147 11:17:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.147 11:17:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:16.147 [2024-12-10 11:17:43.072115] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:10:16.147 [2024-12-10 11:17:43.072249] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62695 ] 00:10:16.147 [2024-12-10 11:17:43.245193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.407 [2024-12-10 11:17:43.361747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.407 [2024-12-10 11:17:43.361775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.407 [2024-12-10 11:17:43.361782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.977 11:17:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.977 11:17:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:10:16.977 11:17:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:17.236 I/O targets: 00:10:17.236 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:17.236 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:10:17.236 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:10:17.236 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:17.236 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:17.236 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:17.236 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:17.236 00:10:17.236 00:10:17.236 CUnit - A unit testing framework for C - Version 2.1-3 00:10:17.236 http://cunit.sourceforge.net/ 00:10:17.236 00:10:17.236 00:10:17.236 Suite: bdevio tests on: Nvme3n1 00:10:17.236 Test: blockdev write read block ...passed 00:10:17.236 Test: blockdev write zeroes read block ...passed 00:10:17.236 Test: blockdev write zeroes read no split ...passed 00:10:17.236 Test: blockdev write zeroes read split ...passed 00:10:17.236 Test: blockdev write zeroes read split partial ...passed 00:10:17.236 Test: blockdev reset ...[2024-12-10 11:17:44.234904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:17.236 [2024-12-10 11:17:44.238974] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
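[Annotation] bdevio is started with -w, so it sits idle until driven over RPC; tests.py perform_tests then kicks off the CUnit suites whose output follows, one suite per I/O target listed above. The two-step pattern, paths as in this run:

    # Start bdevio waiting for an RPC trigger (-w), no reserved memory (-s 0):
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # ...once /var/tmp/spdk.sock is up, run every registered suite:
    test/bdev/bdevio/tests.py perform_tests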
00:10:17.236 passed 00:10:17.236 Test: blockdev write read 8 blocks ...passed 00:10:17.236 Test: blockdev write read size > 128k ...passed 00:10:17.236 Test: blockdev write read invalid size ...passed 00:10:17.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:17.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:17.236 Test: blockdev write read max offset ...passed 00:10:17.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:17.236 Test: blockdev writev readv 8 blocks ...passed 00:10:17.236 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.236 Test: blockdev writev readv block ...passed 00:10:17.236 Test: blockdev writev readv size > 128k ...passed 00:10:17.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.236 Test: blockdev comparev and writev ...[2024-12-10 11:17:44.247963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7a04000 len:0x1000 00:10:17.236 [2024-12-10 11:17:44.248015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:17.236 passed 00:10:17.236 Test: blockdev nvme passthru rw ...passed 00:10:17.236 Test: blockdev nvme passthru vendor specific ...passed 00:10:17.236 Test: blockdev nvme admin passthru ...[2024-12-10 11:17:44.248860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:17.236 [2024-12-10 11:17:44.248897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:17.236 passed 00:10:17.236 Test: blockdev copy ...passed 00:10:17.236 Suite: bdevio tests on: Nvme2n3 00:10:17.236 Test: blockdev write read block ...passed 00:10:17.236 Test: blockdev write zeroes read block ...passed 00:10:17.236 Test: blockdev write zeroes read no split ...passed 00:10:17.236 Test: blockdev write zeroes read split ...passed 00:10:17.236 Test: blockdev write zeroes read split partial ...passed 00:10:17.236 Test: blockdev reset ...[2024-12-10 11:17:44.328298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:17.236 passed 00:10:17.236 Test: blockdev write read 8 blocks ...[2024-12-10 11:17:44.332528] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
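[Annotation] The COMPARE FAILURE notices in the comparev cases are not defects. Decoding the (02/85) completion status per the NVMe spec:

    # (SCT/SC) = (02/85):
    #   SCT 0x02 -> Media and Data Integrity Errors
    #   SC  0x85 -> Compare Failure (miscompare on an NVMe COMPARE command)

The miscompare is evidently provoked on purpose, since each suite reports passed immediately after the notice.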
00:10:17.236 passed 00:10:17.236 Test: blockdev write read size > 128k ...passed 00:10:17.236 Test: blockdev write read invalid size ...passed 00:10:17.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:17.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:17.236 Test: blockdev write read max offset ...passed 00:10:17.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:17.236 Test: blockdev writev readv 8 blocks ...passed 00:10:17.236 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.236 Test: blockdev writev readv block ...passed 00:10:17.236 Test: blockdev writev readv size > 128k ...passed 00:10:17.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.236 Test: blockdev comparev and writev ...[2024-12-10 11:17:44.340883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7a02000 len:0x1000 00:10:17.236 [2024-12-10 11:17:44.340946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:17.236 passed 00:10:17.236 Test: blockdev nvme passthru rw ...passed 00:10:17.236 Test: blockdev nvme passthru vendor specific ...passed 00:10:17.236 Test: blockdev nvme admin passthru ...[2024-12-10 11:17:44.341763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:17.236 [2024-12-10 11:17:44.341799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:17.496 passed 00:10:17.496 Test: blockdev copy ...passed 00:10:17.496 Suite: bdevio tests on: Nvme2n2 00:10:17.496 Test: blockdev write read block ...passed 00:10:17.496 Test: blockdev write zeroes read block ...passed 00:10:17.496 Test: blockdev write zeroes read no split ...passed 00:10:17.496 Test: blockdev write zeroes read split ...passed 00:10:17.496 Test: blockdev write zeroes read split partial ...passed 00:10:17.496 Test: blockdev reset ...[2024-12-10 11:17:44.423395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:17.496 passed 00:10:17.496 Test: blockdev write read 8 blocks ...[2024-12-10 11:17:44.427600] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:17.496 passed 00:10:17.496 Test: blockdev write read size > 128k ...passed 00:10:17.496 Test: blockdev write read invalid size ...passed 00:10:17.496 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:17.496 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:17.496 Test: blockdev write read max offset ...passed 00:10:17.496 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:17.496 Test: blockdev writev readv 8 blocks ...passed 00:10:17.496 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.496 Test: blockdev writev readv block ...passed 00:10:17.496 Test: blockdev writev readv size > 128k ...passed 00:10:17.496 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.496 Test: blockdev comparev and writev ...[2024-12-10 11:17:44.435994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cb838000 len:0x1000 00:10:17.496 [2024-12-10 11:17:44.436046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:17.496 passed 00:10:17.496 Test: blockdev nvme passthru rw ...passed 00:10:17.496 Test: blockdev nvme passthru vendor specific ...passed 00:10:17.497 Test: blockdev nvme admin passthru ...[2024-12-10 11:17:44.436873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:17.497 [2024-12-10 11:17:44.436907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:17.497 passed 00:10:17.497 Test: blockdev copy ...passed 00:10:17.497 Suite: bdevio tests on: Nvme2n1 00:10:17.497 Test: blockdev write read block ...passed 00:10:17.497 Test: blockdev write zeroes read block ...passed 00:10:17.497 Test: blockdev write zeroes read no split ...passed 00:10:17.497 Test: blockdev write zeroes read split ...passed 00:10:17.497 Test: blockdev write zeroes read split partial ...passed 00:10:17.497 Test: blockdev reset ...[2024-12-10 11:17:44.515525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:17.497 passed 00:10:17.497 Test: blockdev write read 8 blocks ...[2024-12-10 11:17:44.519755] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:17.497 passed 00:10:17.497 Test: blockdev write read size > 128k ...passed 00:10:17.497 Test: blockdev write read invalid size ...passed 00:10:17.497 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:17.497 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:17.497 Test: blockdev write read max offset ...passed 00:10:17.497 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:17.497 Test: blockdev writev readv 8 blocks ...passed 00:10:17.497 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.497 Test: blockdev writev readv block ...passed 00:10:17.497 Test: blockdev writev readv size > 128k ...passed 00:10:17.497 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.497 Test: blockdev comparev and writev ...[2024-12-10 11:17:44.528217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cb834000 len:0x1000 00:10:17.497 [2024-12-10 11:17:44.528271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:17.497 passed 00:10:17.497 Test: blockdev nvme passthru rw ...passed 00:10:17.497 Test: blockdev nvme passthru vendor specific ...passed 00:10:17.497 Test: blockdev nvme admin passthru ...[2024-12-10 11:17:44.529118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:17.497 [2024-12-10 11:17:44.529155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:17.497 passed 00:10:17.497 Test: blockdev copy ...passed 00:10:17.497 Suite: bdevio tests on: Nvme1n1p2 00:10:17.497 Test: blockdev write read block ...passed 00:10:17.497 Test: blockdev write zeroes read block ...passed 00:10:17.497 Test: blockdev write zeroes read no split ...passed 00:10:17.497 Test: blockdev write zeroes read split ...passed 00:10:17.756 Test: blockdev write zeroes read split partial ...passed 00:10:17.756 Test: blockdev reset ...[2024-12-10 11:17:44.609307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:17.756 [2024-12-10 11:17:44.613251] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:17.756 passed 00:10:17.756 Test: blockdev write read 8 blocks ...passed 00:10:17.756 Test: blockdev write read size > 128k ...passed 00:10:17.756 Test: blockdev write read invalid size ...passed 00:10:17.756 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:17.756 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:17.756 Test: blockdev write read max offset ...passed 00:10:17.756 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:17.756 Test: blockdev writev readv 8 blocks ...passed 00:10:17.756 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.756 Test: blockdev writev readv block ...passed 00:10:17.756 Test: blockdev writev readv size > 128k ...passed 00:10:17.756 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.756 Test: blockdev comparev and writev ...[2024-12-10 11:17:44.622086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cb830000 len:0x1000 00:10:17.756 [2024-12-10 11:17:44.622134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:17.756 passed 00:10:17.756 Test: blockdev nvme passthru rw ...passed 00:10:17.756 Test: blockdev nvme passthru vendor specific ...passed 00:10:17.756 Test: blockdev nvme admin passthru ...passed 00:10:17.756 Test: blockdev copy ...passed 00:10:17.756 Suite: bdevio tests on: Nvme1n1p1 00:10:17.756 Test: blockdev write read block ...passed 00:10:17.756 Test: blockdev write zeroes read block ...passed 00:10:17.756 Test: blockdev write zeroes read no split ...passed 00:10:17.757 Test: blockdev write zeroes read split ...passed 00:10:17.757 Test: blockdev write zeroes read split partial ...passed 00:10:17.757 Test: blockdev reset ...[2024-12-10 11:17:44.691454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:17.757 [2024-12-10 11:17:44.695207] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:17.757 passed 00:10:17.757 Test: blockdev write read 8 blocks ...passed 00:10:17.757 Test: blockdev write read size > 128k ...passed 00:10:17.757 Test: blockdev write read invalid size ...passed 00:10:17.757 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:17.757 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:17.757 Test: blockdev write read max offset ...passed 00:10:17.757 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:17.757 Test: blockdev writev readv 8 blocks ...passed 00:10:17.757 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.757 Test: blockdev writev readv block ...passed 00:10:17.757 Test: blockdev writev readv size > 128k ...passed 00:10:17.757 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.757 Test: blockdev comparev and writev ...[2024-12-10 11:17:44.703941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b7c0e000 len:0x1000 00:10:17.757 [2024-12-10 11:17:44.703989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:17.757 passed 00:10:17.757 Test: blockdev nvme passthru rw ...passed 00:10:17.757 Test: blockdev nvme passthru vendor specific ...passed 00:10:17.757 Test: blockdev nvme admin passthru ...passed 00:10:17.757 Test: blockdev copy ...passed 00:10:17.757 Suite: bdevio tests on: Nvme0n1 00:10:17.757 Test: blockdev write read block ...passed 00:10:17.757 Test: blockdev write zeroes read block ...passed 00:10:17.757 Test: blockdev write zeroes read no split ...passed 00:10:17.757 Test: blockdev write zeroes read split ...passed 00:10:17.757 Test: blockdev write zeroes read split partial ...passed 00:10:17.757 Test: blockdev reset ...[2024-12-10 11:17:44.772608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:17.757 passed 00:10:17.757 Test: blockdev write read 8 blocks ...[2024-12-10 11:17:44.776335] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:17.757 passed 00:10:17.757 Test: blockdev write read size > 128k ...passed 00:10:17.757 Test: blockdev write read invalid size ...passed 00:10:17.757 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:17.757 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:17.757 Test: blockdev write read max offset ...passed 00:10:17.757 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:17.757 Test: blockdev writev readv 8 blocks ...passed 00:10:17.757 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.757 Test: blockdev writev readv block ...passed 00:10:17.757 Test: blockdev writev readv size > 128k ...passed 00:10:17.757 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.757 Test: blockdev comparev and writev ...[2024-12-10 11:17:44.783771] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:17.757 separate metadata which is not supported yet. 
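Nvme0n1 is the one bdev whose comparev_and_writev step is skipped, and the *ERROR* line says why: the namespace is formatted with separate (non-interleaved) metadata, which bdevio's fused compare-and-write path does not support yet. A sketch for checking a bdev's metadata layout up front, assuming the md_size/md_interleave field names in this build's bdev_get_bdevs output:

    # Inspect metadata layout before attempting compare-and-write (field names are
    # assumptions about bdev_get_bdevs JSON; the bdev name matches the log above).
    scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
        | jq '.[0] | {name, block_size, md_size, md_interleave}'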
00:10:17.757 passed 00:10:17.757 Test: blockdev nvme passthru rw ...passed 00:10:17.757 Test: blockdev nvme passthru vendor specific ...passed 00:10:17.757 Test: blockdev nvme admin passthru ...[2024-12-10 11:17:44.784349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:17.757 [2024-12-10 11:17:44.784394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:17.757 passed 00:10:17.757 Test: blockdev copy ...passed 00:10:17.757 00:10:17.757 Run Summary: Type Total Ran Passed Failed Inactive 00:10:17.757 suites 7 7 n/a 0 0 00:10:17.757 tests 161 161 161 0 0 00:10:17.757 asserts 1025 1025 1025 0 n/a 00:10:17.757 00:10:17.757 Elapsed time = 1.703 seconds 00:10:17.757 0 00:10:17.757 11:17:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62695 00:10:17.757 11:17:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62695 ']' 00:10:17.757 11:17:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62695 00:10:17.757 11:17:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:10:17.757 11:17:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.757 11:17:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62695 00:10:17.757 11:17:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.016 11:17:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.016 killing process with pid 62695 00:10:18.016 11:17:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62695' 00:10:18.016 11:17:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62695 00:10:18.016 11:17:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62695 00:10:18.957 11:17:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:18.957 00:10:18.957 real 0m2.944s 00:10:18.957 user 0m7.610s 00:10:18.957 sys 0m0.398s 00:10:18.957 11:17:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.957 11:17:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:18.957 ************************************ 00:10:18.957 END TEST bdev_bounds 00:10:18.957 ************************************ 00:10:18.957 11:17:45 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:18.957 11:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:18.957 11:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.957 11:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:18.957 ************************************ 00:10:18.957 START TEST bdev_nbd 00:10:18.957 ************************************ 00:10:18.957 11:17:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:18.957 11:17:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:18.957 11:17:46 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62760 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62760 /var/tmp/spdk-nbd.sock 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62760 ']' 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.957 11:17:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:19.219 [2024-12-10 11:17:46.102058] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
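The nbd test harness starts here: after confirming the nbd kernel module is present (-e /sys/module/nbd), it launches the stripped-down bdev_svc app with the shared bdev.json config on a private RPC socket, and waitforlisten blocks until that socket answers before any nbd RPCs are sent. A standalone sketch of that startup handshake, assuming relative paths inside an SPDK checkout:

    # Start bdev_svc on a private socket and wait until its RPC server responds.
    sock=/var/tmp/spdk-nbd.sock
    test/app/bdev_svc/bdev_svc -r "$sock" -i 0 --json test/bdev/bdev.json &
    svc_pid=$!
    until scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svc_pid" || { echo 'bdev_svc exited early' >&2; exit 1; }
        sleep 0.2
    done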
00:10:19.219 [2024-12-10 11:17:46.102224] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.219 [2024-12-10 11:17:46.276574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.479 [2024-12-10 11:17:46.388922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.048 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.048 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:20.048 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:20.048 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:20.048 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:20.048 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:20.048 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:20.048 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:20.048 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:20.048 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:20.048 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:20.048 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:20.048 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:20.048 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:20.048 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:20.308 1+0 records in 00:10:20.308 1+0 records out 00:10:20.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628065 s, 6.5 MB/s 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:20.308 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:20.568 1+0 records in 00:10:20.568 1+0 records out 00:10:20.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700397 s, 5.8 MB/s 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:20.568 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:20.828 1+0 records in 00:10:20.828 1+0 records out 00:10:20.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000770624 s, 5.3 MB/s 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:20.828 11:17:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:21.088 1+0 records in 00:10:21.088 1+0 records out 00:10:21.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070528 s, 5.8 MB/s 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:21.088 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:21.348 1+0 records in 00:10:21.348 1+0 records out 00:10:21.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000791387 s, 5.2 MB/s 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:21.348 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
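Every mapping above is gated by the same waitfornbd helper: poll /proc/partitions until the new nbd name appears, then prove the device actually services I/O with one 4 KiB O_DIRECT read whose size is checked afterwards. Condensed from the xtrace (the sleep interval and scratch path are assumptions; the trace only shows the bounded i<=20 loops):

    # Reconstructed readiness gate for a freshly started /dev/nbdX.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # A single direct read confirms the kernel<->SPDK nbd session passes I/O.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct \
            && [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]
        local rc=$?
        rm -f /tmp/nbdtest
        return "$rc"
    }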
00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:21.608 1+0 records in 00:10:21.608 1+0 records out 00:10:21.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805801 s, 5.1 MB/s 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:21.608 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:21.910 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:21.910 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:21.910 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:21.910 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:10:21.910 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:21.910 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:21.910 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:21.910 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:10:21.910 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:21.910 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:21.910 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:21.910 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:21.910 1+0 records in 00:10:21.910 1+0 records out 00:10:21.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000782304 s, 5.2 MB/s 00:10:21.910 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:21.910 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:21.911 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:21.911 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:21.911 11:17:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:21.911 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:21.911 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:21.911 11:17:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:22.188 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:22.188 { 00:10:22.188 "nbd_device": "/dev/nbd0", 00:10:22.188 "bdev_name": "Nvme0n1" 00:10:22.188 }, 00:10:22.188 { 00:10:22.188 "nbd_device": "/dev/nbd1", 00:10:22.188 "bdev_name": "Nvme1n1p1" 00:10:22.188 }, 00:10:22.188 { 00:10:22.188 "nbd_device": "/dev/nbd2", 00:10:22.188 "bdev_name": "Nvme1n1p2" 00:10:22.188 }, 00:10:22.188 { 00:10:22.188 "nbd_device": "/dev/nbd3", 00:10:22.188 "bdev_name": "Nvme2n1" 00:10:22.188 }, 00:10:22.188 { 00:10:22.188 "nbd_device": "/dev/nbd4", 00:10:22.188 "bdev_name": "Nvme2n2" 00:10:22.188 }, 00:10:22.188 { 00:10:22.188 "nbd_device": "/dev/nbd5", 00:10:22.189 "bdev_name": "Nvme2n3" 00:10:22.189 }, 00:10:22.189 { 00:10:22.189 "nbd_device": "/dev/nbd6", 00:10:22.189 "bdev_name": "Nvme3n1" 00:10:22.189 } 00:10:22.189 ]' 00:10:22.189 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:22.189 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:22.189 { 00:10:22.189 "nbd_device": "/dev/nbd0", 00:10:22.189 "bdev_name": "Nvme0n1" 00:10:22.189 }, 00:10:22.189 { 00:10:22.189 "nbd_device": "/dev/nbd1", 00:10:22.189 "bdev_name": "Nvme1n1p1" 00:10:22.189 }, 00:10:22.189 { 00:10:22.189 "nbd_device": "/dev/nbd2", 00:10:22.189 "bdev_name": "Nvme1n1p2" 00:10:22.189 }, 00:10:22.189 { 00:10:22.189 "nbd_device": "/dev/nbd3", 00:10:22.189 "bdev_name": "Nvme2n1" 00:10:22.189 }, 00:10:22.189 { 00:10:22.189 "nbd_device": "/dev/nbd4", 00:10:22.189 "bdev_name": "Nvme2n2" 00:10:22.189 }, 00:10:22.189 { 00:10:22.189 "nbd_device": "/dev/nbd5", 00:10:22.189 "bdev_name": "Nvme2n3" 00:10:22.189 }, 00:10:22.189 { 00:10:22.189 "nbd_device": "/dev/nbd6", 00:10:22.189 "bdev_name": "Nvme3n1" 00:10:22.189 } 00:10:22.189 ]' 00:10:22.189 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:22.189 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:22.189 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:22.189 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:22.189 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:22.189 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:22.189 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:22.189 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:22.448 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:22.448 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:22.448 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:22.448 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:22.448 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:22.448 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:22.448 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:22.448 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:22.448 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:22.448 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:22.707 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:22.707 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:22.708 11:17:49 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:22.967 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:22.967 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:22.967 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:22.967 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:22.967 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:22.967 11:17:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:22.967 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:22.967 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:22.967 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:22.967 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:23.226 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:23.226 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:23.226 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:23.226 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:23.226 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:23.226 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:23.226 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:23.226 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:23.226 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:23.226 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:23.485 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:23.485 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:23.485 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:23.485 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:23.485 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:23.485 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:23.485 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:23.485 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:23.485 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:23.485 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:23.743 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:23.743 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:23.743 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
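Teardown mirrors setup: each device is detached with nbd_stop_disk, waitfornbd_exit polls until its name drops out of /proc/partitions, and nbd_get_count then requires nbd_get_disks to come back empty before the harness moves on. A compact sketch of that drain against the same socket:

    sock=/var/tmp/spdk-nbd.sock
    for dev in $(scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[].nbd_device'); do
        scripts/rpc.py -s "$sock" nbd_stop_disk "$dev"
        name=$(basename "$dev")
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done
    # Mirrors nbd_get_count's jq | grep -c check: expect zero mappings left.
    [ "$(scripts/rpc.py -s "$sock" nbd_get_disks | jq length)" -eq 0 ]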
00:10:23.743 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:23.743 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:23.743 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:23.743 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:23.743 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:23.743 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:23.743 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:23.744 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:24.003 11:17:50 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:24.003 11:17:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:24.262 /dev/nbd0 00:10:24.262 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:24.263 1+0 records in 00:10:24.263 1+0 records out 00:10:24.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390763 s, 10.5 MB/s 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:24.263 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:10:24.521 /dev/nbd1 00:10:24.521 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:24.521 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:24.521 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:24.521 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:24.521 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:24.521 11:17:51 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:24.521 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:24.521 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:24.521 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:24.521 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:24.521 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:24.521 1+0 records in 00:10:24.521 1+0 records out 00:10:24.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00069215 s, 5.9 MB/s 00:10:24.521 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.521 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:24.521 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.521 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:24.522 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:24.522 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:24.522 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:24.522 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:10:24.781 /dev/nbd10 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:24.781 1+0 records in 00:10:24.781 1+0 records out 00:10:24.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000684336 s, 6.0 MB/s 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:24.781 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:10:25.040 /dev/nbd11 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:25.040 1+0 records in 00:10:25.040 1+0 records out 00:10:25.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00248671 s, 1.6 MB/s 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:25.040 11:17:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:10:25.300 /dev/nbd12 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
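This second pass (nbd_rpc_data_verify) differs from the first in one detail: the nbd node is named explicitly in each RPC, so the bdev-to-device mapping is deterministic, e.g. Nvme2n2 is pinned to /dev/nbd12 rather than whatever index happens to be free. Both invocation forms, as a usage sketch against the socket above:

    # Auto-assigned node: SPDK replies with the path it picked (first pass).
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2
    # Explicit node: the caller chooses the device (this pass).
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12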
00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:25.300 1+0 records in 00:10:25.300 1+0 records out 00:10:25.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000732811 s, 5.6 MB/s 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:25.300 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:10:25.559 /dev/nbd13 00:10:25.559 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:25.559 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:25.559 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:10:25.559 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:25.559 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:25.559 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:25.559 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:10:25.559 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:25.559 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:25.559 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:25.559 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:25.559 1+0 records in 00:10:25.559 1+0 records out 00:10:25.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000985469 s, 4.2 MB/s 00:10:25.559 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.559 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:25.559 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.559 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:25.559 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:25.559 11:17:52 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:25.560 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:25.560 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:10:25.819 /dev/nbd14 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:25.819 1+0 records in 00:10:25.819 1+0 records out 00:10:25.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648783 s, 6.3 MB/s 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:25.819 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:26.078 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:26.078 { 00:10:26.078 "nbd_device": "/dev/nbd0", 00:10:26.078 "bdev_name": "Nvme0n1" 00:10:26.078 }, 00:10:26.078 { 00:10:26.078 "nbd_device": "/dev/nbd1", 00:10:26.078 "bdev_name": "Nvme1n1p1" 00:10:26.078 }, 00:10:26.078 { 00:10:26.078 "nbd_device": "/dev/nbd10", 00:10:26.078 "bdev_name": "Nvme1n1p2" 00:10:26.078 }, 00:10:26.078 { 00:10:26.078 "nbd_device": "/dev/nbd11", 00:10:26.078 "bdev_name": "Nvme2n1" 00:10:26.078 }, 00:10:26.078 { 00:10:26.078 "nbd_device": "/dev/nbd12", 00:10:26.078 "bdev_name": "Nvme2n2" 00:10:26.078 }, 00:10:26.078 { 00:10:26.078 "nbd_device": "/dev/nbd13", 00:10:26.078 "bdev_name": "Nvme2n3" 
00:10:26.078 }, 00:10:26.078 { 00:10:26.078 "nbd_device": "/dev/nbd14", 00:10:26.078 "bdev_name": "Nvme3n1" 00:10:26.078 } 00:10:26.078 ]' 00:10:26.078 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:26.078 { 00:10:26.078 "nbd_device": "/dev/nbd0", 00:10:26.078 "bdev_name": "Nvme0n1" 00:10:26.078 }, 00:10:26.078 { 00:10:26.078 "nbd_device": "/dev/nbd1", 00:10:26.078 "bdev_name": "Nvme1n1p1" 00:10:26.078 }, 00:10:26.078 { 00:10:26.078 "nbd_device": "/dev/nbd10", 00:10:26.078 "bdev_name": "Nvme1n1p2" 00:10:26.078 }, 00:10:26.078 { 00:10:26.078 "nbd_device": "/dev/nbd11", 00:10:26.078 "bdev_name": "Nvme2n1" 00:10:26.078 }, 00:10:26.078 { 00:10:26.078 "nbd_device": "/dev/nbd12", 00:10:26.078 "bdev_name": "Nvme2n2" 00:10:26.078 }, 00:10:26.078 { 00:10:26.078 "nbd_device": "/dev/nbd13", 00:10:26.078 "bdev_name": "Nvme2n3" 00:10:26.078 }, 00:10:26.078 { 00:10:26.078 "nbd_device": "/dev/nbd14", 00:10:26.078 "bdev_name": "Nvme3n1" 00:10:26.078 } 00:10:26.078 ]' 00:10:26.078 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:26.078 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:26.078 /dev/nbd1 00:10:26.078 /dev/nbd10 00:10:26.078 /dev/nbd11 00:10:26.078 /dev/nbd12 00:10:26.078 /dev/nbd13 00:10:26.078 /dev/nbd14' 00:10:26.078 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:26.078 /dev/nbd1 00:10:26.078 /dev/nbd10 00:10:26.078 /dev/nbd11 00:10:26.078 /dev/nbd12 00:10:26.078 /dev/nbd13 00:10:26.078 /dev/nbd14' 00:10:26.078 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:26.078 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:10:26.078 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:10:26.078 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:10:26.078 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:10:26.078 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:10:26.078 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:26.078 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:26.078 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:26.078 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:26.078 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:26.079 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:26.079 256+0 records in 00:10:26.079 256+0 records out 00:10:26.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00527344 s, 199 MB/s 00:10:26.079 11:17:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:26.079 11:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:26.079 256+0 records in 00:10:26.079 256+0 records out 00:10:26.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.139497 s, 7.5 MB/s 00:10:26.079 11:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:26.079 11:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:26.338 256+0 records in 00:10:26.338 256+0 records out 00:10:26.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148417 s, 7.1 MB/s 00:10:26.338 11:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:26.338 11:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:26.597 256+0 records in 00:10:26.597 256+0 records out 00:10:26.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146408 s, 7.2 MB/s 00:10:26.597 11:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:26.597 11:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:26.597 256+0 records in 00:10:26.597 256+0 records out 00:10:26.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146739 s, 7.1 MB/s 00:10:26.597 11:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:26.597 11:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:26.856 256+0 records in 00:10:26.856 256+0 records out 00:10:26.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146752 s, 7.1 MB/s 00:10:26.856 11:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:26.856 11:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:26.856 256+0 records in 00:10:26.856 256+0 records out 00:10:26.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15326 s, 6.8 MB/s 00:10:26.856 11:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:26.856 11:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:10:27.115 256+0 records in 00:10:27.115 256+0 records out 00:10:27.115 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163816 s, 6.4 MB/s 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:27.115 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:27.374 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:27.374 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:27.374 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:27.374 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:27.374 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:27.374 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:27.374 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:27.374 11:17:54 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:27.374 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:27.374 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:27.633 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:27.633 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:27.633 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:27.633 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:27.633 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:27.633 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:27.633 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:27.633 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:27.633 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:27.633 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:27.893 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:27.893 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:27.893 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:27.893 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:27.893 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:27.893 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:27.893 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:27.893 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:27.893 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:27.893 11:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:28.152 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:28.152 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:28.152 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:28.152 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:28.152 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:28.152 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:28.152 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:28.152 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:28.153 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:28.153 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:28.412 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:10:28.979 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:10:28.979 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:10:28.979 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:10:28.979 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:28.979 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:28.979 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:10:28.979 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:28.979 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:28.980 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:28.980 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.980 11:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:28.980 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:28.980 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:28.980 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:28.980 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:10:28.980 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:28.980 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:28.980 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:28.980 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:28.980 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:28.980 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:28.980 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:28.980 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:28.980 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:28.980 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.980 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:28.980 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:29.238 malloc_lvol_verify 00:10:29.238 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:29.496 7edd41e5-d659-47e8-b3ff-ea73b84aa384 00:10:29.496 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:29.755 e4ae06ff-0a21-4a33-95ad-fc1c3715fde7 00:10:29.755 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:30.014 /dev/nbd0 00:10:30.014 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:30.014 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:30.014 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:30.014 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:30.014 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:30.014 mke2fs 1.47.0 (5-Feb-2023) 00:10:30.014 Discarding device blocks: 0/4096 done 00:10:30.014 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:30.014 00:10:30.014 Allocating group tables: 0/1 done 00:10:30.015 Writing inode tables: 0/1 done 00:10:30.015 Creating journal (1024 blocks): done 00:10:30.015 Writing superblocks and filesystem accounting information: 0/1 done 00:10:30.015 00:10:30.015 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:30.015 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:30.015 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:30.015 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:30.015 11:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:30.015 11:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:10:30.015 11:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62760 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62760 ']' 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62760 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62760 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.274 killing process with pid 62760 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62760' 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62760 00:10:30.274 11:17:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62760 00:10:31.678 11:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:31.678 00:10:31.678 real 0m12.477s 00:10:31.678 user 0m16.244s 00:10:31.678 sys 0m5.130s 00:10:31.678 11:17:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.678 11:17:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:31.678 ************************************ 00:10:31.678 END TEST bdev_nbd 00:10:31.678 ************************************ 00:10:31.678 11:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:10:31.678 11:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:10:31.678 11:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:10:31.678 skipping fio tests on NVMe due to multi-ns failures. 00:10:31.678 11:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:10:31.678 11:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:31.679 11:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:31.679 11:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:31.679 11:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.679 11:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:31.679 ************************************ 00:10:31.679 START TEST bdev_verify 00:10:31.679 ************************************ 00:10:31.679 11:17:58 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:31.679 [2024-12-10 11:17:58.652641] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:10:31.679 [2024-12-10 11:17:58.652800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63189 ] 00:10:31.937 [2024-12-10 11:17:58.838400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:31.937 [2024-12-10 11:17:58.953675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.937 [2024-12-10 11:17:58.953706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.873 Running I/O for 5 seconds... 
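
The verify run launched here is bdevperf replaying the command from the trace: 128 outstanding 4 KiB I/Os per job, each write read back and compared, for five seconds. The paired per-core rows in the results table below come from -C plus the 0x3 core mask, which lets both reactors drive every bdev. The standalone form:

    # -q 128: queue depth    -o 4096: I/O size in bytes    -w verify: write, read back, compare
    # -t 5: seconds to run   -C: every core drives every bdev   -m 0x3: cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
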
00:10:35.186 20608.00 IOPS, 80.50 MiB/s [2024-12-10T11:18:03.236Z] 21152.00 IOPS, 82.62 MiB/s [2024-12-10T11:18:04.196Z] 21610.67 IOPS, 84.42 MiB/s [2024-12-10T11:18:05.133Z] 21344.00 IOPS, 83.38 MiB/s [2024-12-10T11:18:05.133Z] 21145.60 IOPS, 82.60 MiB/s 00:10:38.019 Latency(us) 00:10:38.019 [2024-12-10T11:18:05.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:38.019 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:38.019 Verification LBA range: start 0x0 length 0xbd0bd 00:10:38.019 Nvme0n1 : 5.08 1512.58 5.91 0.00 0.00 84453.19 21055.74 87591.89 00:10:38.019 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:38.019 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:38.019 Nvme0n1 : 5.08 1486.33 5.81 0.00 0.00 85937.12 17265.71 88434.12 00:10:38.019 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:38.019 Verification LBA range: start 0x0 length 0x4ff80 00:10:38.019 Nvme1n1p1 : 5.08 1512.12 5.91 0.00 0.00 84320.33 18739.61 81275.17 00:10:38.019 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:38.019 Verification LBA range: start 0x4ff80 length 0x4ff80 00:10:38.019 Nvme1n1p1 : 5.08 1485.81 5.80 0.00 0.00 85814.48 15160.13 82538.51 00:10:38.019 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:38.019 Verification LBA range: start 0x0 length 0x4ff7f 00:10:38.019 Nvme1n1p2 : 5.08 1511.65 5.90 0.00 0.00 84154.66 17476.27 72010.64 00:10:38.019 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:38.019 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:10:38.020 Nvme1n1p2 : 5.08 1485.41 5.80 0.00 0.00 85644.21 14107.35 78748.48 00:10:38.020 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:38.020 Verification LBA range: start 0x0 length 0x80000 00:10:38.020 Nvme2n1 : 5.08 1510.69 5.90 0.00 0.00 84064.27 19055.45 64430.57 00:10:38.020 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:38.020 Verification LBA range: start 0x80000 length 0x80000 00:10:38.020 Nvme2n1 : 5.09 1484.46 5.80 0.00 0.00 85494.92 16423.48 77906.25 00:10:38.020 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:38.020 Verification LBA range: start 0x0 length 0x80000 00:10:38.020 Nvme2n2 : 5.09 1510.08 5.90 0.00 0.00 83948.38 18950.17 59798.31 00:10:38.020 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:38.020 Verification LBA range: start 0x80000 length 0x80000 00:10:38.020 Nvme2n2 : 5.09 1483.99 5.80 0.00 0.00 85358.96 16318.20 76221.79 00:10:38.020 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:38.020 Verification LBA range: start 0x0 length 0x80000 00:10:38.020 Nvme2n3 : 5.09 1509.77 5.90 0.00 0.00 83807.46 18529.05 61903.88 00:10:38.020 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:38.020 Verification LBA range: start 0x80000 length 0x80000 00:10:38.020 Nvme2n3 : 5.09 1483.54 5.80 0.00 0.00 85220.35 16949.87 77485.13 00:10:38.020 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:38.020 Verification LBA range: start 0x0 length 0x20000 00:10:38.020 Nvme3n1 : 5.09 1509.32 5.90 0.00 0.00 83672.74 18107.94 64430.57 00:10:38.020 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:38.020 Verification LBA range: start 0x20000 length 0x20000 00:10:38.020 
Nvme3n1 : 5.09 1483.22 5.79 0.00 0.00 85103.06 16844.59 79169.59 00:10:38.020 [2024-12-10T11:18:05.134Z] =================================================================================================================== 00:10:38.020 [2024-12-10T11:18:05.134Z] Total : 20968.97 81.91 0.00 0.00 84779.20 14107.35 88434.12 00:10:39.395 00:10:39.395 real 0m7.760s 00:10:39.395 user 0m14.322s 00:10:39.395 sys 0m0.335s 00:10:39.395 11:18:06 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.395 11:18:06 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:39.395 ************************************ 00:10:39.395 END TEST bdev_verify 00:10:39.395 ************************************ 00:10:39.395 11:18:06 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:39.395 11:18:06 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:39.395 11:18:06 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.395 11:18:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:39.395 ************************************ 00:10:39.395 START TEST bdev_verify_big_io 00:10:39.395 ************************************ 00:10:39.395 11:18:06 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:39.395 [2024-12-10 11:18:06.465739] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:10:39.395 [2024-12-10 11:18:06.465895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63287 ] 00:10:39.654 [2024-12-10 11:18:06.645938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:39.913 [2024-12-10 11:18:06.766811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.913 [2024-12-10 11:18:06.766827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.850 Running I/O for 5 seconds... 
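
The 4 KiB totals just reported are self-consistent: MiB/s is simply IOPS times the I/O size. A quick check (bc assumed available):

    echo 'scale=2; 20968.97 * 4096 / 1048576' | bc   # -> 81.91, matching the Total row

The big-I/O pass that starts here is the identical verify workload with -o 65536 in place of -o 4096, so the totals reported below trade IOPS for per-I/O bandwidth.
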
00:10:45.320 1269.00 IOPS, 79.31 MiB/s [2024-12-10T11:18:13.811Z] 3066.00 IOPS, 191.62 MiB/s [2024-12-10T11:18:13.811Z] 3903.67 IOPS, 243.98 MiB/s 00:10:46.697 Latency(us) 00:10:46.697 [2024-12-10T11:18:13.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:46.698 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:46.698 Verification LBA range: start 0x0 length 0xbd0b 00:10:46.698 Nvme0n1 : 5.64 136.23 8.51 0.00 0.00 899236.98 16423.48 902870.26 00:10:46.698 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:46.698 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:46.698 Nvme0n1 : 5.61 153.06 9.57 0.00 0.00 807114.18 24951.06 963510.80 00:10:46.698 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:46.698 Verification LBA range: start 0x0 length 0x4ff8 00:10:46.698 Nvme1n1p1 : 5.70 139.37 8.71 0.00 0.00 869339.65 70326.18 1327354.04 00:10:46.698 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:46.698 Verification LBA range: start 0x4ff8 length 0x4ff8 00:10:46.698 Nvme1n1p1 : 5.62 159.55 9.97 0.00 0.00 768097.76 70326.18 791695.94 00:10:46.698 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:46.698 Verification LBA range: start 0x0 length 0x4ff7 00:10:46.698 Nvme1n1p2 : 5.70 138.80 8.68 0.00 0.00 850326.46 72852.87 1179121.61 00:10:46.698 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:46.698 Verification LBA range: start 0x4ff7 length 0x4ff7 00:10:46.698 Nvme1n1p2 : 5.66 155.72 9.73 0.00 0.00 763071.58 69905.07 1024151.34 00:10:46.698 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:46.698 Verification LBA range: start 0x0 length 0x8000 00:10:46.698 Nvme2n1 : 5.74 142.28 8.89 0.00 0.00 810651.65 49481.00 1381256.74 00:10:46.698 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:46.698 Verification LBA range: start 0x8000 length 0x8000 00:10:46.698 Nvme2n1 : 5.70 162.58 10.16 0.00 0.00 722366.60 45901.52 848967.56 00:10:46.698 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:46.698 Verification LBA range: start 0x0 length 0x8000 00:10:46.698 Nvme2n2 : 5.75 147.24 9.20 0.00 0.00 769665.31 41900.93 1394732.41 00:10:46.698 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:46.698 Verification LBA range: start 0x8000 length 0x8000 00:10:46.698 Nvme2n2 : 5.70 167.79 10.49 0.00 0.00 688233.17 30109.71 855705.39 00:10:46.698 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:46.698 Verification LBA range: start 0x0 length 0x8000 00:10:46.698 Nvme2n3 : 5.79 157.69 9.86 0.00 0.00 701920.58 9790.92 1428421.60 00:10:46.698 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:46.698 Verification LBA range: start 0x8000 length 0x8000 00:10:46.698 Nvme2n3 : 5.73 172.91 10.81 0.00 0.00 654022.29 21687.42 852336.48 00:10:46.698 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:46.698 Verification LBA range: start 0x0 length 0x2000 00:10:46.698 Nvme3n1 : 5.84 189.11 11.82 0.00 0.00 578446.19 792.88 1462110.79 00:10:46.698 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:46.698 Verification LBA range: start 0x2000 length 0x2000 00:10:46.698 Nvme3n1 : 5.75 183.96 11.50 0.00 0.00 603048.93 6316.72 744531.07 00:10:46.698 
[2024-12-10T11:18:13.812Z] =================================================================================================================== 00:10:46.698 [2024-12-10T11:18:13.812Z] Total : 2206.30 137.89 0.00 0.00 739167.64 792.88 1462110.79 00:10:48.625 00:10:48.625 real 0m9.086s 00:10:48.625 user 0m17.003s 00:10:48.625 sys 0m0.327s 00:10:48.625 11:18:15 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.625 ************************************ 00:10:48.625 END TEST bdev_verify_big_io 00:10:48.625 11:18:15 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.625 ************************************ 00:10:48.625 11:18:15 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:48.625 11:18:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:48.625 11:18:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.625 11:18:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:48.625 ************************************ 00:10:48.625 START TEST bdev_write_zeroes 00:10:48.625 ************************************ 00:10:48.625 11:18:15 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:48.625 [2024-12-10 11:18:15.624952] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:10:48.626 [2024-12-10 11:18:15.625246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63402 ] 00:10:48.885 [2024-12-10 11:18:15.805837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.885 [2024-12-10 11:18:15.919673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.821 Running I/O for 1 seconds... 
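
bdev_write_zeroes swaps the workload type: -w write_zeroes issues zero-fill writes for one second on a single core instead of verifying data, exercising each bdev's zero-fill path. The standalone equivalent of the traced command:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1
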
00:10:50.757 67648.00 IOPS, 264.25 MiB/s 00:10:50.757 Latency(us) 00:10:50.757 [2024-12-10T11:18:17.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.757 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:50.757 Nvme0n1 : 1.02 9637.10 37.64 0.00 0.00 13243.58 10685.79 36426.44 00:10:50.757 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:50.757 Nvme1n1p1 : 1.02 9626.63 37.60 0.00 0.00 13240.16 10580.51 38532.01 00:10:50.757 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:50.757 Nvme1n1p2 : 1.02 9616.83 37.57 0.00 0.00 13207.99 10422.59 35794.76 00:10:50.757 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:50.757 Nvme2n1 : 1.03 9607.68 37.53 0.00 0.00 13201.12 10685.79 36005.32 00:10:50.757 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:50.757 Nvme2n2 : 1.03 9651.94 37.70 0.00 0.00 13081.53 7106.31 30109.71 00:10:50.757 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:50.757 Nvme2n3 : 1.03 9643.09 37.67 0.00 0.00 13055.01 7106.31 29056.93 00:10:50.757 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:50.757 Nvme3n1 : 1.03 9634.42 37.63 0.00 0.00 13029.51 7053.67 28004.14 00:10:50.757 [2024-12-10T11:18:17.871Z] =================================================================================================================== 00:10:50.757 [2024-12-10T11:18:17.871Z] Total : 67417.68 263.35 0.00 0.00 13151.01 7053.67 38532.01 00:10:51.694 00:10:51.694 real 0m3.267s 00:10:51.694 user 0m2.896s 00:10:51.694 sys 0m0.255s 00:10:51.694 11:18:18 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.694 ************************************ 00:10:51.694 END TEST bdev_write_zeroes 00:10:51.694 ************************************ 00:10:51.694 11:18:18 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:51.953 11:18:18 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:51.953 11:18:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:51.953 11:18:18 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.953 11:18:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:51.953 ************************************ 00:10:51.953 START TEST bdev_json_nonenclosed 00:10:51.953 ************************************ 00:10:51.953 11:18:18 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:51.953 [2024-12-10 11:18:18.971397] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:10:51.953 [2024-12-10 11:18:18.971540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63455 ] 00:10:52.213 [2024-12-10 11:18:19.154938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.213 [2024-12-10 11:18:19.266147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.213 [2024-12-10 11:18:19.266252] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:52.213 [2024-12-10 11:18:19.266274] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:52.213 [2024-12-10 11:18:19.266286] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:52.471 00:10:52.471 real 0m0.637s 00:10:52.471 user 0m0.400s 00:10:52.471 sys 0m0.132s 00:10:52.471 11:18:19 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.471 11:18:19 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:52.471 ************************************ 00:10:52.471 END TEST bdev_json_nonenclosed 00:10:52.471 ************************************ 00:10:52.471 11:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:52.471 11:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:52.471 11:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.471 11:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:52.730 ************************************ 00:10:52.730 START TEST bdev_json_nonarray 00:10:52.730 ************************************ 00:10:52.730 11:18:19 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:52.730 [2024-12-10 11:18:19.678294] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:10:52.730 [2024-12-10 11:18:19.678433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63486 ] 00:10:52.989 [2024-12-10 11:18:19.856327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.989 [2024-12-10 11:18:19.968595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.989 [2024-12-10 11:18:19.968711] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
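
Both JSON tests are negative tests: bdevperf is handed a deliberately malformed config and must be rejected by json_config_prepare_ctx rather than crash. The loader accepts one top-level object whose "subsystems" key is an array; the two rejected shapes below are inferred from the error messages, since the log never prints the files themselves:

    # shape the loader accepts
    cat > /tmp/good.json <<'EOF'
    { "subsystems": [] }
    EOF
    # nonenclosed.json (inferred): top level not enclosed in {}   e.g.  "subsystems": []
    # nonarray.json (inferred):    "subsystems" is not an array   e.g.  { "subsystems": {} }
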
00:10:52.989 [2024-12-10 11:18:19.968735] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:52.989 [2024-12-10 11:18:19.968747] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:53.248 00:10:53.248 real 0m0.630s 00:10:53.248 user 0m0.385s 00:10:53.248 sys 0m0.142s 00:10:53.248 ************************************ 00:10:53.248 END TEST bdev_json_nonarray 00:10:53.248 11:18:20 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.248 11:18:20 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:53.248 ************************************ 00:10:53.248 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:10:53.248 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:10:53.248 11:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:53.248 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:53.248 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.248 11:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:53.248 ************************************ 00:10:53.248 START TEST bdev_gpt_uuid 00:10:53.248 ************************************ 00:10:53.248 11:18:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:10:53.248 11:18:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:10:53.248 11:18:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:10:53.248 11:18:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63511 00:10:53.248 11:18:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:53.248 11:18:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63511 00:10:53.248 11:18:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63511 ']' 00:10:53.248 11:18:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.248 11:18:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.248 11:18:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.248 11:18:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.248 11:18:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:53.248 11:18:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:53.507 [2024-12-10 11:18:20.398513] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:10:53.507 [2024-12-10 11:18:20.399033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63511 ] 00:10:53.507 [2024-12-10 11:18:20.581771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.766 [2024-12-10 11:18:20.688777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.703 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.703 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:10:54.703 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:54.703 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.703 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:54.962 Some configs were skipped because the RPC state that can call them passed over. 00:10:54.962 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.962 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:10:54.962 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.962 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:54.962 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.962 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:54.962 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.962 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:54.962 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.962 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:10:54.962 { 00:10:54.962 "name": "Nvme1n1p1", 00:10:54.962 "aliases": [ 00:10:54.962 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:54.962 ], 00:10:54.962 "product_name": "GPT Disk", 00:10:54.962 "block_size": 4096, 00:10:54.962 "num_blocks": 655104, 00:10:54.962 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:54.962 "assigned_rate_limits": { 00:10:54.962 "rw_ios_per_sec": 0, 00:10:54.962 "rw_mbytes_per_sec": 0, 00:10:54.962 "r_mbytes_per_sec": 0, 00:10:54.962 "w_mbytes_per_sec": 0 00:10:54.962 }, 00:10:54.962 "claimed": false, 00:10:54.962 "zoned": false, 00:10:54.962 "supported_io_types": { 00:10:54.962 "read": true, 00:10:54.962 "write": true, 00:10:54.962 "unmap": true, 00:10:54.962 "flush": true, 00:10:54.962 "reset": true, 00:10:54.962 "nvme_admin": false, 00:10:54.962 "nvme_io": false, 00:10:54.962 "nvme_io_md": false, 00:10:54.962 "write_zeroes": true, 00:10:54.962 "zcopy": false, 00:10:54.962 "get_zone_info": false, 00:10:54.962 "zone_management": false, 00:10:54.962 "zone_append": false, 00:10:54.962 "compare": true, 00:10:54.962 "compare_and_write": false, 00:10:54.962 "abort": true, 00:10:54.963 "seek_hole": false, 00:10:54.963 "seek_data": false, 00:10:54.963 "copy": true, 00:10:54.963 "nvme_iov_md": false 00:10:54.963 }, 00:10:54.963 "driver_specific": { 
00:10:54.963 "gpt": { 00:10:54.963 "base_bdev": "Nvme1n1", 00:10:54.963 "offset_blocks": 256, 00:10:54.963 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:54.963 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:54.963 "partition_name": "SPDK_TEST_first" 00:10:54.963 } 00:10:54.963 } 00:10:54.963 } 00:10:54.963 ]' 00:10:54.963 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:10:54.963 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:10:54.963 11:18:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:10:54.963 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:54.963 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:10:55.222 { 00:10:55.222 "name": "Nvme1n1p2", 00:10:55.222 "aliases": [ 00:10:55.222 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:55.222 ], 00:10:55.222 "product_name": "GPT Disk", 00:10:55.222 "block_size": 4096, 00:10:55.222 "num_blocks": 655103, 00:10:55.222 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:55.222 "assigned_rate_limits": { 00:10:55.222 "rw_ios_per_sec": 0, 00:10:55.222 "rw_mbytes_per_sec": 0, 00:10:55.222 "r_mbytes_per_sec": 0, 00:10:55.222 "w_mbytes_per_sec": 0 00:10:55.222 }, 00:10:55.222 "claimed": false, 00:10:55.222 "zoned": false, 00:10:55.222 "supported_io_types": { 00:10:55.222 "read": true, 00:10:55.222 "write": true, 00:10:55.222 "unmap": true, 00:10:55.222 "flush": true, 00:10:55.222 "reset": true, 00:10:55.222 "nvme_admin": false, 00:10:55.222 "nvme_io": false, 00:10:55.222 "nvme_io_md": false, 00:10:55.222 "write_zeroes": true, 00:10:55.222 "zcopy": false, 00:10:55.222 "get_zone_info": false, 00:10:55.222 "zone_management": false, 00:10:55.222 "zone_append": false, 00:10:55.222 "compare": true, 00:10:55.222 "compare_and_write": false, 00:10:55.222 "abort": true, 00:10:55.222 "seek_hole": false, 00:10:55.222 "seek_data": false, 00:10:55.222 "copy": true, 00:10:55.222 "nvme_iov_md": false 00:10:55.222 }, 00:10:55.222 "driver_specific": { 00:10:55.222 "gpt": { 00:10:55.222 "base_bdev": "Nvme1n1", 00:10:55.222 "offset_blocks": 655360, 00:10:55.222 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:55.222 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:55.222 "partition_name": "SPDK_TEST_second" 00:10:55.222 } 00:10:55.222 } 00:10:55.222 } 00:10:55.222 ]' 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63511 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63511 ']' 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63511 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63511 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.222 killing process with pid 63511 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63511' 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63511 00:10:55.222 11:18:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63511 00:10:57.792 00:10:57.792 real 0m4.283s 00:10:57.792 user 0m4.373s 00:10:57.792 sys 0m0.521s 00:10:57.792 11:18:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.792 ************************************ 00:10:57.792 END TEST bdev_gpt_uuid 00:10:57.792 ************************************ 00:10:57.792 11:18:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:57.792 11:18:24 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:10:57.792 11:18:24 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:10:57.792 11:18:24 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:10:57.792 11:18:24 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:57.792 11:18:24 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:57.792 11:18:24 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:10:57.792 11:18:24 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:10:57.792 11:18:24 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:10:57.792 11:18:24 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:58.358 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:58.617 Waiting for block devices as requested 00:10:58.617 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:58.617 0000:00:10.0 (1b36 0010): 
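The two jq probes traced above are the core of the bdev_gpt_uuid check: for each GPT partition bdev, the first alias and driver_specific.gpt.unique_partition_guid must both equal the GUID recorded in the partition entry. A minimal sketch of the same verification done by hand, assuming a running SPDK target with scripts/rpc.py on PATH (the bdev name is the one from the dump above):

    # fetch one GPT bdev and confirm its alias matches the partition GUID,
    # mirroring the jq assertions in bdev/blockdev.sh
    bdev_json=$(rpc.py bdev_get_bdevs -b Nvme1n1p2)
    alias=$(jq -r '.[0].aliases[0]' <<< "$bdev_json")
    guid=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev_json")
    [[ "$alias" == "$guid" ]] && echo 'GPT UUID matches' || echo 'GPT UUID mismatch'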
uio_pci_generic -> nvme 00:10:58.874 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:58.874 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:04.139 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:04.139 11:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:11:04.139 11:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:11:04.139 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:04.139 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:04.139 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:04.139 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:04.140 11:18:31 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:11:04.140 00:11:04.140 real 1m5.312s 00:11:04.140 user 1m21.422s 00:11:04.140 sys 0m11.982s 00:11:04.140 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.140 11:18:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:04.140 ************************************ 00:11:04.140 END TEST blockdev_nvme_gpt 00:11:04.140 ************************************ 00:11:04.398 11:18:31 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:04.398 11:18:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:04.398 11:18:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.398 11:18:31 -- common/autotest_common.sh@10 -- # set +x 00:11:04.398 ************************************ 00:11:04.398 START TEST nvme 00:11:04.398 ************************************ 00:11:04.398 11:18:31 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:04.398 * Looking for test storage... 00:11:04.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:04.398 11:18:31 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:04.398 11:18:31 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:04.398 11:18:31 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:04.657 11:18:31 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:04.657 11:18:31 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.657 11:18:31 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.657 11:18:31 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.657 11:18:31 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.657 11:18:31 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.657 11:18:31 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.657 11:18:31 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.657 11:18:31 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.657 11:18:31 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.657 11:18:31 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.657 11:18:31 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.657 11:18:31 nvme -- scripts/common.sh@344 -- # case "$op" in 00:11:04.657 11:18:31 nvme -- scripts/common.sh@345 -- # : 1 00:11:04.657 11:18:31 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.657 11:18:31 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.657 11:18:31 nvme -- scripts/common.sh@365 -- # decimal 1 00:11:04.657 11:18:31 nvme -- scripts/common.sh@353 -- # local d=1 00:11:04.657 11:18:31 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.657 11:18:31 nvme -- scripts/common.sh@355 -- # echo 1 00:11:04.657 11:18:31 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.657 11:18:31 nvme -- scripts/common.sh@366 -- # decimal 2 00:11:04.657 11:18:31 nvme -- scripts/common.sh@353 -- # local d=2 00:11:04.657 11:18:31 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.657 11:18:31 nvme -- scripts/common.sh@355 -- # echo 2 00:11:04.657 11:18:31 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.657 11:18:31 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.657 11:18:31 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.657 11:18:31 nvme -- scripts/common.sh@368 -- # return 0 00:11:04.657 11:18:31 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.657 11:18:31 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:04.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.657 --rc genhtml_branch_coverage=1 00:11:04.657 --rc genhtml_function_coverage=1 00:11:04.657 --rc genhtml_legend=1 00:11:04.657 --rc geninfo_all_blocks=1 00:11:04.657 --rc geninfo_unexecuted_blocks=1 00:11:04.657 00:11:04.657 ' 00:11:04.657 11:18:31 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:04.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.657 --rc genhtml_branch_coverage=1 00:11:04.657 --rc genhtml_function_coverage=1 00:11:04.657 --rc genhtml_legend=1 00:11:04.657 --rc geninfo_all_blocks=1 00:11:04.657 --rc geninfo_unexecuted_blocks=1 00:11:04.657 00:11:04.657 ' 00:11:04.657 11:18:31 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:04.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.657 --rc genhtml_branch_coverage=1 00:11:04.657 --rc genhtml_function_coverage=1 00:11:04.657 --rc genhtml_legend=1 00:11:04.657 --rc geninfo_all_blocks=1 00:11:04.657 --rc geninfo_unexecuted_blocks=1 00:11:04.657 00:11:04.657 ' 00:11:04.657 11:18:31 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:04.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.657 --rc genhtml_branch_coverage=1 00:11:04.657 --rc genhtml_function_coverage=1 00:11:04.657 --rc genhtml_legend=1 00:11:04.657 --rc geninfo_all_blocks=1 00:11:04.657 --rc geninfo_unexecuted_blocks=1 00:11:04.657 00:11:04.657 ' 00:11:04.657 11:18:31 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:05.231 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:06.166 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:06.167 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:06.167 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:06.167 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:06.167 11:18:33 nvme -- nvme/nvme.sh@79 -- # uname 00:11:06.167 11:18:33 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:06.167 11:18:33 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:06.167 11:18:33 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:06.167 11:18:33 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:06.167 11:18:33 nvme -- 
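The cmp_versions walk traced above splits both version strings on '.', '-' and ':' and compares them field by field; the lt 1.15 2 call is what decides whether the extra lcov branch/function coverage flags get exported. Where GNU coreutils is available, the same strict ordering test can be sketched with sort -V instead of the field walk (an alternative formulation, not the autotest helper itself):

    # succeed only when $1 sorts strictly before $2 in version order
    version_lt() {
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    version_lt 1.15 2 && echo '1.15 < 2'   # same outcome as the lt 1.15 2 call above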
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:11:06.167 11:18:33 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:11:06.167 11:18:33 nvme -- common/autotest_common.sh@1075 -- # stubpid=64177 00:11:06.167 11:18:33 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:06.167 Waiting for stub to ready for secondary processes... 00:11:06.167 11:18:33 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:11:06.167 11:18:33 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:06.167 11:18:33 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64177 ]] 00:11:06.167 11:18:33 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:11:06.167 [2024-12-10 11:18:33.244080] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:11:06.167 [2024-12-10 11:18:33.244222] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:11:07.102 11:18:34 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:07.102 11:18:34 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64177 ]] 00:11:07.102 11:18:34 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:11:07.361 [2024-12-10 11:18:34.246321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:07.361 [2024-12-10 11:18:34.355508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.361 [2024-12-10 11:18:34.355606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.361 [2024-12-10 11:18:34.355621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.361 [2024-12-10 11:18:34.372976] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:11:07.361 [2024-12-10 11:18:34.373033] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:07.361 [2024-12-10 11:18:34.389642] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:07.361 [2024-12-10 11:18:34.389809] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:07.361 [2024-12-10 11:18:34.392949] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:07.361 [2024-12-10 11:18:34.393201] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:07.361 [2024-12-10 11:18:34.393300] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:07.361 [2024-12-10 11:18:34.396838] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:07.361 [2024-12-10 11:18:34.397088] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:07.361 [2024-12-10 11:18:34.397199] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:07.361 [2024-12-10 11:18:34.401247] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:07.361 [2024-12-10 11:18:34.401488] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:07.361 [2024-12-10 11:18:34.401591] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:07.361 [2024-12-10 11:18:34.401662] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:11:07.361 [2024-12-10 11:18:34.401726] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:11:08.296 11:18:35 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:08.296 done. 00:11:08.296 11:18:35 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:11:08.296 11:18:35 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:08.296 11:18:35 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:11:08.296 11:18:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.296 11:18:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:08.296 ************************************ 00:11:08.296 START TEST nvme_reset 00:11:08.296 ************************************ 00:11:08.296 11:18:35 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:08.554 Initializing NVMe Controllers 00:11:08.554 Skipping QEMU NVMe SSD at 0000:00:10.0 00:11:08.554 Skipping QEMU NVMe SSD at 0000:00:11.0 00:11:08.554 Skipping QEMU NVMe SSD at 0000:00:13.0 00:11:08.554 Skipping QEMU NVMe SSD at 0000:00:12.0 00:11:08.554 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:08.554 00:11:08.554 real 0m0.286s 00:11:08.554 user 0m0.096s 00:11:08.554 sys 0m0.145s 00:11:08.554 11:18:35 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.554 11:18:35 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:11:08.554 ************************************ 00:11:08.554 END TEST nvme_reset 00:11:08.554 ************************************ 00:11:08.554 11:18:35 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:08.554 11:18:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:08.554 11:18:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.554 11:18:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:08.554 ************************************ 00:11:08.554 START TEST nvme_identify 00:11:08.554 ************************************ 00:11:08.554 11:18:35 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:11:08.554 11:18:35 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:11:08.554 11:18:35 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:08.554 11:18:35 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:08.554 11:18:35 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:08.554 11:18:35 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:08.554 11:18:35 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:11:08.554 11:18:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:08.554 11:18:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:08.554 11:18:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:08.813 11:18:35 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:08.813 11:18:35 nvme.nvme_identify -- 
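get_nvme_bdfs, expanded above, derives the controller list from gen_nvme.sh rather than parsing lspci: the script emits a bdev_nvme_attach_controller JSON config, and jq pulls one PCI address per controller out of .config[].params.traddr. A standalone sketch under the checkout path used throughout this run:

    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh prints a JSON config; each params.traddr is one controller's PCI address
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"

On this VM the expansion yields the four QEMU controllers printed next, 0000:00:10.0 through 0000:00:13.0.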
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:08.813 11:18:35 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:09.074 [2024-12-10 11:18:35.945347] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64210 terminated unexpected 00:11:09.074 ===================================================== 00:11:09.074 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:09.074 ===================================================== 00:11:09.074 Controller Capabilities/Features 00:11:09.074 ================================ 00:11:09.074 Vendor ID: 1b36 00:11:09.074 Subsystem Vendor ID: 1af4 00:11:09.074 Serial Number: 12340 00:11:09.074 Model Number: QEMU NVMe Ctrl 00:11:09.074 Firmware Version: 8.0.0 00:11:09.074 Recommended Arb Burst: 6 00:11:09.074 IEEE OUI Identifier: 00 54 52 00:11:09.074 Multi-path I/O 00:11:09.074 May have multiple subsystem ports: No 00:11:09.074 May have multiple controllers: No 00:11:09.074 Associated with SR-IOV VF: No 00:11:09.074 Max Data Transfer Size: 524288 00:11:09.074 Max Number of Namespaces: 256 00:11:09.074 Max Number of I/O Queues: 64 00:11:09.074 NVMe Specification Version (VS): 1.4 00:11:09.074 NVMe Specification Version (Identify): 1.4 00:11:09.074 Maximum Queue Entries: 2048 00:11:09.074 Contiguous Queues Required: Yes 00:11:09.074 Arbitration Mechanisms Supported 00:11:09.074 Weighted Round Robin: Not Supported 00:11:09.074 Vendor Specific: Not Supported 00:11:09.074 Reset Timeout: 7500 ms 00:11:09.074 Doorbell Stride: 4 bytes 00:11:09.074 NVM Subsystem Reset: Not Supported 00:11:09.074 Command Sets Supported 00:11:09.074 NVM Command Set: Supported 00:11:09.074 Boot Partition: Not Supported 00:11:09.074 Memory Page Size Minimum: 4096 bytes 00:11:09.074 Memory Page Size Maximum: 65536 bytes 00:11:09.074 Persistent Memory Region: Not Supported 00:11:09.074 Optional Asynchronous Events Supported 00:11:09.074 Namespace Attribute Notices: Supported 00:11:09.074 Firmware Activation Notices: Not Supported 00:11:09.074 ANA Change Notices: Not Supported 00:11:09.074 PLE Aggregate Log Change Notices: Not Supported 00:11:09.074 LBA Status Info Alert Notices: Not Supported 00:11:09.074 EGE Aggregate Log Change Notices: Not Supported 00:11:09.074 Normal NVM Subsystem Shutdown event: Not Supported 00:11:09.074 Zone Descriptor Change Notices: Not Supported 00:11:09.074 Discovery Log Change Notices: Not Supported 00:11:09.074 Controller Attributes 00:11:09.074 128-bit Host Identifier: Not Supported 00:11:09.074 Non-Operational Permissive Mode: Not Supported 00:11:09.074 NVM Sets: Not Supported 00:11:09.074 Read Recovery Levels: Not Supported 00:11:09.074 Endurance Groups: Not Supported 00:11:09.074 Predictable Latency Mode: Not Supported 00:11:09.074 Traffic Based Keep ALive: Not Supported 00:11:09.074 Namespace Granularity: Not Supported 00:11:09.074 SQ Associations: Not Supported 00:11:09.074 UUID List: Not Supported 00:11:09.074 Multi-Domain Subsystem: Not Supported 00:11:09.074 Fixed Capacity Management: Not Supported 00:11:09.074 Variable Capacity Management: Not Supported 00:11:09.074 Delete Endurance Group: Not Supported 00:11:09.074 Delete NVM Set: Not Supported 00:11:09.074 Extended LBA Formats Supported: Supported 00:11:09.074 Flexible Data Placement Supported: Not Supported 00:11:09.074 00:11:09.074 Controller Memory Buffer Support 00:11:09.074 ================================ 00:11:09.074 Supported: No 
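Run with no target, spdk_nvme_identify -i 0 (invoked above) claims and dumps every controller it can reach, which is why four back-to-back reports follow. To inspect a single device instead, the identify example accepts a transport ID string; a sketch assuming the first PCIe address from the list above (the -r flag follows the identify example's usage and is worth confirming against its -h output):

    # dump one controller instead of all four attached QEMU devices
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:PCIe traddr:0000:00:10.0'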
00:11:09.074 00:11:09.074 Persistent Memory Region Support 00:11:09.074 ================================ 00:11:09.074 Supported: No 00:11:09.074 00:11:09.074 Admin Command Set Attributes 00:11:09.074 ============================ 00:11:09.074 Security Send/Receive: Not Supported 00:11:09.074 Format NVM: Supported 00:11:09.074 Firmware Activate/Download: Not Supported 00:11:09.074 Namespace Management: Supported 00:11:09.074 Device Self-Test: Not Supported 00:11:09.074 Directives: Supported 00:11:09.074 NVMe-MI: Not Supported 00:11:09.074 Virtualization Management: Not Supported 00:11:09.074 Doorbell Buffer Config: Supported 00:11:09.074 Get LBA Status Capability: Not Supported 00:11:09.074 Command & Feature Lockdown Capability: Not Supported 00:11:09.074 Abort Command Limit: 4 00:11:09.074 Async Event Request Limit: 4 00:11:09.074 Number of Firmware Slots: N/A 00:11:09.074 Firmware Slot 1 Read-Only: N/A 00:11:09.074 Firmware Activation Without Reset: N/A 00:11:09.074 Multiple Update Detection Support: N/A 00:11:09.074 Firmware Update Granularity: No Information Provided 00:11:09.074 Per-Namespace SMART Log: Yes 00:11:09.074 Asymmetric Namespace Access Log Page: Not Supported 00:11:09.074 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:09.074 Command Effects Log Page: Supported 00:11:09.074 Get Log Page Extended Data: Supported 00:11:09.075 Telemetry Log Pages: Not Supported 00:11:09.075 Persistent Event Log Pages: Not Supported 00:11:09.075 Supported Log Pages Log Page: May Support 00:11:09.075 Commands Supported & Effects Log Page: Not Supported 00:11:09.075 Feature Identifiers & Effects Log Page:May Support 00:11:09.075 NVMe-MI Commands & Effects Log Page: May Support 00:11:09.075 Data Area 4 for Telemetry Log: Not Supported 00:11:09.075 Error Log Page Entries Supported: 1 00:11:09.075 Keep Alive: Not Supported 00:11:09.075 00:11:09.075 NVM Command Set Attributes 00:11:09.075 ========================== 00:11:09.075 Submission Queue Entry Size 00:11:09.075 Max: 64 00:11:09.075 Min: 64 00:11:09.075 Completion Queue Entry Size 00:11:09.075 Max: 16 00:11:09.075 Min: 16 00:11:09.075 Number of Namespaces: 256 00:11:09.075 Compare Command: Supported 00:11:09.075 Write Uncorrectable Command: Not Supported 00:11:09.075 Dataset Management Command: Supported 00:11:09.075 Write Zeroes Command: Supported 00:11:09.075 Set Features Save Field: Supported 00:11:09.075 Reservations: Not Supported 00:11:09.075 Timestamp: Supported 00:11:09.075 Copy: Supported 00:11:09.075 Volatile Write Cache: Present 00:11:09.075 Atomic Write Unit (Normal): 1 00:11:09.075 Atomic Write Unit (PFail): 1 00:11:09.075 Atomic Compare & Write Unit: 1 00:11:09.075 Fused Compare & Write: Not Supported 00:11:09.075 Scatter-Gather List 00:11:09.075 SGL Command Set: Supported 00:11:09.075 SGL Keyed: Not Supported 00:11:09.075 SGL Bit Bucket Descriptor: Not Supported 00:11:09.075 SGL Metadata Pointer: Not Supported 00:11:09.075 Oversized SGL: Not Supported 00:11:09.075 SGL Metadata Address: Not Supported 00:11:09.075 SGL Offset: Not Supported 00:11:09.075 Transport SGL Data Block: Not Supported 00:11:09.075 Replay Protected Memory Block: Not Supported 00:11:09.075 00:11:09.075 Firmware Slot Information 00:11:09.075 ========================= 00:11:09.075 Active slot: 1 00:11:09.075 Slot 1 Firmware Revision: 1.0 00:11:09.075 00:11:09.075 00:11:09.075 Commands Supported and Effects 00:11:09.075 ============================== 00:11:09.075 Admin Commands 00:11:09.075 -------------- 00:11:09.075 Delete I/O Submission Queue (00h): Supported 
00:11:09.075 Create I/O Submission Queue (01h): Supported 00:11:09.075 Get Log Page (02h): Supported 00:11:09.075 Delete I/O Completion Queue (04h): Supported 00:11:09.075 Create I/O Completion Queue (05h): Supported 00:11:09.075 Identify (06h): Supported 00:11:09.075 Abort (08h): Supported 00:11:09.075 Set Features (09h): Supported 00:11:09.075 Get Features (0Ah): Supported 00:11:09.075 Asynchronous Event Request (0Ch): Supported 00:11:09.075 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:09.075 Directive Send (19h): Supported 00:11:09.075 Directive Receive (1Ah): Supported 00:11:09.075 Virtualization Management (1Ch): Supported 00:11:09.075 Doorbell Buffer Config (7Ch): Supported 00:11:09.075 Format NVM (80h): Supported LBA-Change 00:11:09.075 I/O Commands 00:11:09.075 ------------ 00:11:09.075 Flush (00h): Supported LBA-Change 00:11:09.075 Write (01h): Supported LBA-Change 00:11:09.075 Read (02h): Supported 00:11:09.075 Compare (05h): Supported 00:11:09.075 Write Zeroes (08h): Supported LBA-Change 00:11:09.075 Dataset Management (09h): Supported LBA-Change 00:11:09.075 Unknown (0Ch): Supported 00:11:09.075 Unknown (12h): Supported 00:11:09.075 Copy (19h): Supported LBA-Change 00:11:09.075 Unknown (1Dh): Supported LBA-Change 00:11:09.075 00:11:09.075 Error Log 00:11:09.075 ========= 00:11:09.075 00:11:09.075 Arbitration 00:11:09.075 =========== 00:11:09.075 Arbitration Burst: no limit 00:11:09.075 00:11:09.075 Power Management 00:11:09.075 ================ 00:11:09.075 Number of Power States: 1 00:11:09.075 Current Power State: Power State #0 00:11:09.075 Power State #0: 00:11:09.075 Max Power: 25.00 W 00:11:09.075 Non-Operational State: Operational 00:11:09.075 Entry Latency: 16 microseconds 00:11:09.075 Exit Latency: 4 microseconds 00:11:09.075 Relative Read Throughput: 0 00:11:09.075 Relative Read Latency: 0 00:11:09.075 Relative Write Throughput: 0 00:11:09.075 Relative Write Latency: 0 00:11:09.075 [2024-12-10 11:18:35.946714] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64210 terminated unexpected Idle Power: Not Reported 00:11:09.075 Active Power: Not Reported 00:11:09.075 Non-Operational Permissive Mode: Not Supported 00:11:09.075 00:11:09.075 Health Information 00:11:09.075 ================== 00:11:09.075 Critical Warnings: 00:11:09.075 Available Spare Space: OK 00:11:09.075 Temperature: OK 00:11:09.075 Device Reliability: OK 00:11:09.075 Read Only: No 00:11:09.075 Volatile Memory Backup: OK 00:11:09.075 Current Temperature: 323 Kelvin (50 Celsius) 00:11:09.075 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:09.075 Available Spare: 0% 00:11:09.075 Available Spare Threshold: 0% 00:11:09.075 Life Percentage Used: 0% 00:11:09.075 Data Units Read: 766 00:11:09.075 Data Units Written: 695 00:11:09.075 Host Read Commands: 36367 00:11:09.075 Host Write Commands: 36153 00:11:09.075 Controller Busy Time: 0 minutes 00:11:09.075 Power Cycles: 0 00:11:09.075 Power On Hours: 0 hours 00:11:09.075 Unsafe Shutdowns: 0 00:11:09.075 Unrecoverable Media Errors: 0 00:11:09.075 Lifetime Error Log Entries: 0 00:11:09.075 Warning Temperature Time: 0 minutes 00:11:09.075 Critical Temperature Time: 0 minutes 00:11:09.075 00:11:09.075 Number of Queues 00:11:09.075 ================ 00:11:09.075 Number of I/O Submission Queues: 64 00:11:09.075 Number of I/O Completion Queues: 64 00:11:09.075 00:11:09.075 ZNS Specific Controller Data 00:11:09.075 ============================ 00:11:09.075 Zone Append Size Limit: 0 00:11:09.075
00:11:09.075 00:11:09.075 Active Namespaces 00:11:09.075 ================= 00:11:09.075 Namespace ID:1 00:11:09.075 Error Recovery Timeout: Unlimited 00:11:09.075 Command Set Identifier: NVM (00h) 00:11:09.075 Deallocate: Supported 00:11:09.075 Deallocated/Unwritten Error: Supported 00:11:09.075 Deallocated Read Value: All 0x00 00:11:09.075 Deallocate in Write Zeroes: Not Supported 00:11:09.075 Deallocated Guard Field: 0xFFFF 00:11:09.075 Flush: Supported 00:11:09.075 Reservation: Not Supported 00:11:09.075 Metadata Transferred as: Separate Metadata Buffer 00:11:09.075 Namespace Sharing Capabilities: Private 00:11:09.075 Size (in LBAs): 1548666 (5GiB) 00:11:09.075 Capacity (in LBAs): 1548666 (5GiB) 00:11:09.075 Utilization (in LBAs): 1548666 (5GiB) 00:11:09.075 Thin Provisioning: Not Supported 00:11:09.075 Per-NS Atomic Units: No 00:11:09.075 Maximum Single Source Range Length: 128 00:11:09.075 Maximum Copy Length: 128 00:11:09.075 Maximum Source Range Count: 128 00:11:09.075 NGUID/EUI64 Never Reused: No 00:11:09.075 Namespace Write Protected: No 00:11:09.075 Number of LBA Formats: 8 00:11:09.075 Current LBA Format: LBA Format #07 00:11:09.075 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:09.075 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:09.075 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:09.075 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:09.075 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:09.075 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:09.075 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:09.075 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:09.075 00:11:09.075 NVM Specific Namespace Data 00:11:09.075 =========================== 00:11:09.075 Logical Block Storage Tag Mask: 0 00:11:09.075 Protection Information Capabilities: 00:11:09.075 16b Guard Protection Information Storage Tag Support: No 00:11:09.075 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:09.075 Storage Tag Check Read Support: No 00:11:09.075 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.075 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.075 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.075 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.075 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.075 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.075 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.075 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.075 ===================================================== 00:11:09.075 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:09.075 ===================================================== 00:11:09.075 Controller Capabilities/Features 00:11:09.075 ================================ 00:11:09.075 Vendor ID: 1b36 00:11:09.075 Subsystem Vendor ID: 1af4 00:11:09.075 Serial Number: 12341 00:11:09.075 Model Number: QEMU NVMe Ctrl 00:11:09.075 Firmware Version: 8.0.0 00:11:09.075 Recommended Arb Burst: 6 00:11:09.075 IEEE OUI Identifier: 00 54 52 00:11:09.075 Multi-path I/O 00:11:09.076 May have multiple subsystem ports: No 00:11:09.076 May have multiple controllers: No 
00:11:09.076 Associated with SR-IOV VF: No 00:11:09.076 Max Data Transfer Size: 524288 00:11:09.076 Max Number of Namespaces: 256 00:11:09.076 Max Number of I/O Queues: 64 00:11:09.076 NVMe Specification Version (VS): 1.4 00:11:09.076 NVMe Specification Version (Identify): 1.4 00:11:09.076 Maximum Queue Entries: 2048 00:11:09.076 Contiguous Queues Required: Yes 00:11:09.076 Arbitration Mechanisms Supported 00:11:09.076 Weighted Round Robin: Not Supported 00:11:09.076 Vendor Specific: Not Supported 00:11:09.076 Reset Timeout: 7500 ms 00:11:09.076 Doorbell Stride: 4 bytes 00:11:09.076 NVM Subsystem Reset: Not Supported 00:11:09.076 Command Sets Supported 00:11:09.076 NVM Command Set: Supported 00:11:09.076 Boot Partition: Not Supported 00:11:09.076 Memory Page Size Minimum: 4096 bytes 00:11:09.076 Memory Page Size Maximum: 65536 bytes 00:11:09.076 Persistent Memory Region: Not Supported 00:11:09.076 Optional Asynchronous Events Supported 00:11:09.076 Namespace Attribute Notices: Supported 00:11:09.076 Firmware Activation Notices: Not Supported 00:11:09.076 ANA Change Notices: Not Supported 00:11:09.076 PLE Aggregate Log Change Notices: Not Supported 00:11:09.076 LBA Status Info Alert Notices: Not Supported 00:11:09.076 EGE Aggregate Log Change Notices: Not Supported 00:11:09.076 Normal NVM Subsystem Shutdown event: Not Supported 00:11:09.076 Zone Descriptor Change Notices: Not Supported 00:11:09.076 Discovery Log Change Notices: Not Supported 00:11:09.076 Controller Attributes 00:11:09.076 128-bit Host Identifier: Not Supported 00:11:09.076 Non-Operational Permissive Mode: Not Supported 00:11:09.076 NVM Sets: Not Supported 00:11:09.076 Read Recovery Levels: Not Supported 00:11:09.076 Endurance Groups: Not Supported 00:11:09.076 Predictable Latency Mode: Not Supported 00:11:09.076 Traffic Based Keep ALive: Not Supported 00:11:09.076 Namespace Granularity: Not Supported 00:11:09.076 SQ Associations: Not Supported 00:11:09.076 UUID List: Not Supported 00:11:09.076 Multi-Domain Subsystem: Not Supported 00:11:09.076 Fixed Capacity Management: Not Supported 00:11:09.076 Variable Capacity Management: Not Supported 00:11:09.076 Delete Endurance Group: Not Supported 00:11:09.076 Delete NVM Set: Not Supported 00:11:09.076 Extended LBA Formats Supported: Supported 00:11:09.076 Flexible Data Placement Supported: Not Supported 00:11:09.076 00:11:09.076 Controller Memory Buffer Support 00:11:09.076 ================================ 00:11:09.076 Supported: No 00:11:09.076 00:11:09.076 Persistent Memory Region Support 00:11:09.076 ================================ 00:11:09.076 Supported: No 00:11:09.076 00:11:09.076 Admin Command Set Attributes 00:11:09.076 ============================ 00:11:09.076 Security Send/Receive: Not Supported 00:11:09.076 Format NVM: Supported 00:11:09.076 Firmware Activate/Download: Not Supported 00:11:09.076 Namespace Management: Supported 00:11:09.076 Device Self-Test: Not Supported 00:11:09.076 Directives: Supported 00:11:09.076 NVMe-MI: Not Supported 00:11:09.076 Virtualization Management: Not Supported 00:11:09.076 Doorbell Buffer Config: Supported 00:11:09.076 Get LBA Status Capability: Not Supported 00:11:09.076 Command & Feature Lockdown Capability: Not Supported 00:11:09.076 Abort Command Limit: 4 00:11:09.076 Async Event Request Limit: 4 00:11:09.076 Number of Firmware Slots: N/A 00:11:09.076 Firmware Slot 1 Read-Only: N/A 00:11:09.076 Firmware Activation Without Reset: N/A 00:11:09.076 Multiple Update Detection Support: N/A 00:11:09.076 Firmware Update Granularity: No 
Information Provided 00:11:09.076 Per-Namespace SMART Log: Yes 00:11:09.076 Asymmetric Namespace Access Log Page: Not Supported 00:11:09.076 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:09.076 Command Effects Log Page: Supported 00:11:09.076 Get Log Page Extended Data: Supported 00:11:09.076 Telemetry Log Pages: Not Supported 00:11:09.076 Persistent Event Log Pages: Not Supported 00:11:09.076 Supported Log Pages Log Page: May Support 00:11:09.076 Commands Supported & Effects Log Page: Not Supported 00:11:09.076 Feature Identifiers & Effects Log Page:May Support 00:11:09.076 NVMe-MI Commands & Effects Log Page: May Support 00:11:09.076 Data Area 4 for Telemetry Log: Not Supported 00:11:09.076 Error Log Page Entries Supported: 1 00:11:09.076 Keep Alive: Not Supported 00:11:09.076 00:11:09.076 NVM Command Set Attributes 00:11:09.076 ========================== 00:11:09.076 Submission Queue Entry Size 00:11:09.076 Max: 64 00:11:09.076 Min: 64 00:11:09.076 Completion Queue Entry Size 00:11:09.076 Max: 16 00:11:09.076 Min: 16 00:11:09.076 Number of Namespaces: 256 00:11:09.076 Compare Command: Supported 00:11:09.076 Write Uncorrectable Command: Not Supported 00:11:09.076 Dataset Management Command: Supported 00:11:09.076 Write Zeroes Command: Supported 00:11:09.076 Set Features Save Field: Supported 00:11:09.076 Reservations: Not Supported 00:11:09.076 Timestamp: Supported 00:11:09.076 Copy: Supported 00:11:09.076 Volatile Write Cache: Present 00:11:09.076 Atomic Write Unit (Normal): 1 00:11:09.076 Atomic Write Unit (PFail): 1 00:11:09.076 Atomic Compare & Write Unit: 1 00:11:09.076 Fused Compare & Write: Not Supported 00:11:09.076 Scatter-Gather List 00:11:09.076 SGL Command Set: Supported 00:11:09.076 SGL Keyed: Not Supported 00:11:09.076 SGL Bit Bucket Descriptor: Not Supported 00:11:09.076 SGL Metadata Pointer: Not Supported 00:11:09.076 Oversized SGL: Not Supported 00:11:09.076 SGL Metadata Address: Not Supported 00:11:09.076 SGL Offset: Not Supported 00:11:09.076 Transport SGL Data Block: Not Supported 00:11:09.076 Replay Protected Memory Block: Not Supported 00:11:09.076 00:11:09.076 Firmware Slot Information 00:11:09.076 ========================= 00:11:09.076 Active slot: 1 00:11:09.076 Slot 1 Firmware Revision: 1.0 00:11:09.076 00:11:09.076 00:11:09.076 Commands Supported and Effects 00:11:09.076 ============================== 00:11:09.076 Admin Commands 00:11:09.076 -------------- 00:11:09.076 Delete I/O Submission Queue (00h): Supported 00:11:09.076 Create I/O Submission Queue (01h): Supported 00:11:09.076 Get Log Page (02h): Supported 00:11:09.076 Delete I/O Completion Queue (04h): Supported 00:11:09.076 Create I/O Completion Queue (05h): Supported 00:11:09.076 Identify (06h): Supported 00:11:09.076 Abort (08h): Supported 00:11:09.076 Set Features (09h): Supported 00:11:09.076 Get Features (0Ah): Supported 00:11:09.076 Asynchronous Event Request (0Ch): Supported 00:11:09.076 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:09.076 Directive Send (19h): Supported 00:11:09.076 Directive Receive (1Ah): Supported 00:11:09.076 Virtualization Management (1Ch): Supported 00:11:09.076 Doorbell Buffer Config (7Ch): Supported 00:11:09.076 Format NVM (80h): Supported LBA-Change 00:11:09.076 I/O Commands 00:11:09.076 ------------ 00:11:09.076 Flush (00h): Supported LBA-Change 00:11:09.076 Write (01h): Supported LBA-Change 00:11:09.076 Read (02h): Supported 00:11:09.076 Compare (05h): Supported 00:11:09.076 Write Zeroes (08h): Supported LBA-Change 00:11:09.076 Dataset Management 
(09h): Supported LBA-Change 00:11:09.076 Unknown (0Ch): Supported 00:11:09.076 Unknown (12h): Supported 00:11:09.076 Copy (19h): Supported LBA-Change 00:11:09.076 Unknown (1Dh): Supported LBA-Change 00:11:09.076 00:11:09.076 Error Log 00:11:09.076 ========= 00:11:09.076 00:11:09.076 Arbitration 00:11:09.076 =========== 00:11:09.076 Arbitration Burst: no limit 00:11:09.076 00:11:09.076 Power Management 00:11:09.076 ================ 00:11:09.076 Number of Power States: 1 00:11:09.076 Current Power State: Power State #0 00:11:09.076 Power State #0: 00:11:09.076 Max Power: 25.00 W 00:11:09.076 Non-Operational State: Operational 00:11:09.076 Entry Latency: 16 microseconds 00:11:09.076 Exit Latency: 4 microseconds 00:11:09.076 Relative Read Throughput: 0 00:11:09.076 Relative Read Latency: 0 00:11:09.076 Relative Write Throughput: 0 00:11:09.076 Relative Write Latency: 0 00:11:09.076 Idle Power: Not Reported 00:11:09.076 Active Power: Not Reported 00:11:09.076 Non-Operational Permissive Mode: Not Supported 00:11:09.076 00:11:09.076 Health Information 00:11:09.076 ================== 00:11:09.076 Critical Warnings: 00:11:09.076 Available Spare Space: OK 00:11:09.076 [2024-12-10 11:18:35.947541] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64210 terminated unexpected Temperature: OK 00:11:09.076 Device Reliability: OK 00:11:09.076 Read Only: No 00:11:09.076 Volatile Memory Backup: OK 00:11:09.076 Current Temperature: 323 Kelvin (50 Celsius) 00:11:09.076 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:09.076 Available Spare: 0% 00:11:09.076 Available Spare Threshold: 0% 00:11:09.077 Life Percentage Used: 0% 00:11:09.077 Data Units Read: 1186 00:11:09.077 Data Units Written: 1046 00:11:09.077 Host Read Commands: 54553 00:11:09.077 Host Write Commands: 53238 00:11:09.077 Controller Busy Time: 0 minutes 00:11:09.077 Power Cycles: 0 00:11:09.077 Power On Hours: 0 hours 00:11:09.077 Unsafe Shutdowns: 0 00:11:09.077 Unrecoverable Media Errors: 0 00:11:09.077 Lifetime Error Log Entries: 0 00:11:09.077 Warning Temperature Time: 0 minutes 00:11:09.077 Critical Temperature Time: 0 minutes 00:11:09.077 00:11:09.077 Number of Queues 00:11:09.077 ================ 00:11:09.077 Number of I/O Submission Queues: 64 00:11:09.077 Number of I/O Completion Queues: 64 00:11:09.077 00:11:09.077 ZNS Specific Controller Data 00:11:09.077 ============================ 00:11:09.077 Zone Append Size Limit: 0 00:11:09.077 00:11:09.077 00:11:09.077 Active Namespaces 00:11:09.077 ================= 00:11:09.077 Namespace ID:1 00:11:09.077 Error Recovery Timeout: Unlimited 00:11:09.077 Command Set Identifier: NVM (00h) 00:11:09.077 Deallocate: Supported 00:11:09.077 Deallocated/Unwritten Error: Supported 00:11:09.077 Deallocated Read Value: All 0x00 00:11:09.077 Deallocate in Write Zeroes: Not Supported 00:11:09.077 Deallocated Guard Field: 0xFFFF 00:11:09.077 Flush: Supported 00:11:09.077 Reservation: Not Supported 00:11:09.077 Namespace Sharing Capabilities: Private 00:11:09.077 Size (in LBAs): 1310720 (5GiB) 00:11:09.077 Capacity (in LBAs): 1310720 (5GiB) 00:11:09.077 Utilization (in LBAs): 1310720 (5GiB) 00:11:09.077 Thin Provisioning: Not Supported 00:11:09.077 Per-NS Atomic Units: No 00:11:09.077 Maximum Single Source Range Length: 128 00:11:09.077 Maximum Copy Length: 128 00:11:09.077 Maximum Source Range Count: 128 00:11:09.077 NGUID/EUI64 Never Reused: No 00:11:09.077 Namespace Write Protected: No 00:11:09.077 Number of LBA Formats: 8 00:11:09.077 Current LBA
Format: LBA Format #04 00:11:09.077 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:09.077 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:09.077 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:09.077 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:09.077 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:09.077 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:09.077 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:09.077 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:09.077 00:11:09.077 NVM Specific Namespace Data 00:11:09.077 =========================== 00:11:09.077 Logical Block Storage Tag Mask: 0 00:11:09.077 Protection Information Capabilities: 00:11:09.077 16b Guard Protection Information Storage Tag Support: No 00:11:09.077 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:09.077 Storage Tag Check Read Support: No 00:11:09.077 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.077 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.077 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.077 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.077 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.077 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.077 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.077 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.077 ===================================================== 00:11:09.077 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:09.077 ===================================================== 00:11:09.077 Controller Capabilities/Features 00:11:09.077 ================================ 00:11:09.077 Vendor ID: 1b36 00:11:09.077 Subsystem Vendor ID: 1af4 00:11:09.077 Serial Number: 12343 00:11:09.077 Model Number: QEMU NVMe Ctrl 00:11:09.077 Firmware Version: 8.0.0 00:11:09.077 Recommended Arb Burst: 6 00:11:09.077 IEEE OUI Identifier: 00 54 52 00:11:09.077 Multi-path I/O 00:11:09.077 May have multiple subsystem ports: No 00:11:09.077 May have multiple controllers: Yes 00:11:09.077 Associated with SR-IOV VF: No 00:11:09.077 Max Data Transfer Size: 524288 00:11:09.077 Max Number of Namespaces: 256 00:11:09.077 Max Number of I/O Queues: 64 00:11:09.077 NVMe Specification Version (VS): 1.4 00:11:09.077 NVMe Specification Version (Identify): 1.4 00:11:09.077 Maximum Queue Entries: 2048 00:11:09.077 Contiguous Queues Required: Yes 00:11:09.077 Arbitration Mechanisms Supported 00:11:09.077 Weighted Round Robin: Not Supported 00:11:09.077 Vendor Specific: Not Supported 00:11:09.077 Reset Timeout: 7500 ms 00:11:09.077 Doorbell Stride: 4 bytes 00:11:09.077 NVM Subsystem Reset: Not Supported 00:11:09.077 Command Sets Supported 00:11:09.077 NVM Command Set: Supported 00:11:09.077 Boot Partition: Not Supported 00:11:09.077 Memory Page Size Minimum: 4096 bytes 00:11:09.077 Memory Page Size Maximum: 65536 bytes 00:11:09.077 Persistent Memory Region: Not Supported 00:11:09.077 Optional Asynchronous Events Supported 00:11:09.077 Namespace Attribute Notices: Supported 00:11:09.077 Firmware Activation Notices: Not Supported 00:11:09.077 ANA Change Notices: Not Supported 00:11:09.077 PLE Aggregate 
Log Change Notices: Not Supported 00:11:09.077 LBA Status Info Alert Notices: Not Supported 00:11:09.077 EGE Aggregate Log Change Notices: Not Supported 00:11:09.077 Normal NVM Subsystem Shutdown event: Not Supported 00:11:09.077 Zone Descriptor Change Notices: Not Supported 00:11:09.077 Discovery Log Change Notices: Not Supported 00:11:09.077 Controller Attributes 00:11:09.077 128-bit Host Identifier: Not Supported 00:11:09.077 Non-Operational Permissive Mode: Not Supported 00:11:09.077 NVM Sets: Not Supported 00:11:09.077 Read Recovery Levels: Not Supported 00:11:09.077 Endurance Groups: Supported 00:11:09.077 Predictable Latency Mode: Not Supported 00:11:09.077 Traffic Based Keep ALive: Not Supported 00:11:09.077 Namespace Granularity: Not Supported 00:11:09.077 SQ Associations: Not Supported 00:11:09.077 UUID List: Not Supported 00:11:09.077 Multi-Domain Subsystem: Not Supported 00:11:09.077 Fixed Capacity Management: Not Supported 00:11:09.077 Variable Capacity Management: Not Supported 00:11:09.077 Delete Endurance Group: Not Supported 00:11:09.077 Delete NVM Set: Not Supported 00:11:09.077 Extended LBA Formats Supported: Supported 00:11:09.077 Flexible Data Placement Supported: Supported 00:11:09.077 00:11:09.077 Controller Memory Buffer Support 00:11:09.077 ================================ 00:11:09.077 Supported: No 00:11:09.077 00:11:09.077 Persistent Memory Region Support 00:11:09.077 ================================ 00:11:09.077 Supported: No 00:11:09.077 00:11:09.077 Admin Command Set Attributes 00:11:09.077 ============================ 00:11:09.077 Security Send/Receive: Not Supported 00:11:09.077 Format NVM: Supported 00:11:09.077 Firmware Activate/Download: Not Supported 00:11:09.077 Namespace Management: Supported 00:11:09.077 Device Self-Test: Not Supported 00:11:09.077 Directives: Supported 00:11:09.077 NVMe-MI: Not Supported 00:11:09.077 Virtualization Management: Not Supported 00:11:09.077 Doorbell Buffer Config: Supported 00:11:09.077 Get LBA Status Capability: Not Supported 00:11:09.077 Command & Feature Lockdown Capability: Not Supported 00:11:09.077 Abort Command Limit: 4 00:11:09.077 Async Event Request Limit: 4 00:11:09.077 Number of Firmware Slots: N/A 00:11:09.077 Firmware Slot 1 Read-Only: N/A 00:11:09.077 Firmware Activation Without Reset: N/A 00:11:09.077 Multiple Update Detection Support: N/A 00:11:09.077 Firmware Update Granularity: No Information Provided 00:11:09.077 Per-Namespace SMART Log: Yes 00:11:09.077 Asymmetric Namespace Access Log Page: Not Supported 00:11:09.077 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:09.077 Command Effects Log Page: Supported 00:11:09.077 Get Log Page Extended Data: Supported 00:11:09.077 Telemetry Log Pages: Not Supported 00:11:09.077 Persistent Event Log Pages: Not Supported 00:11:09.077 Supported Log Pages Log Page: May Support 00:11:09.077 Commands Supported & Effects Log Page: Not Supported 00:11:09.077 Feature Identifiers & Effects Log Page:May Support 00:11:09.077 NVMe-MI Commands & Effects Log Page: May Support 00:11:09.077 Data Area 4 for Telemetry Log: Not Supported 00:11:09.077 Error Log Page Entries Supported: 1 00:11:09.077 Keep Alive: Not Supported 00:11:09.077 00:11:09.077 NVM Command Set Attributes 00:11:09.077 ========================== 00:11:09.077 Submission Queue Entry Size 00:11:09.077 Max: 64 00:11:09.077 Min: 64 00:11:09.077 Completion Queue Entry Size 00:11:09.077 Max: 16 00:11:09.077 Min: 16 00:11:09.077 Number of Namespaces: 256 00:11:09.077 Compare Command: Supported 00:11:09.077 Write 
Uncorrectable Command: Not Supported 00:11:09.077 Dataset Management Command: Supported 00:11:09.077 Write Zeroes Command: Supported 00:11:09.077 Set Features Save Field: Supported 00:11:09.077 Reservations: Not Supported 00:11:09.077 Timestamp: Supported 00:11:09.078 Copy: Supported 00:11:09.078 Volatile Write Cache: Present 00:11:09.078 Atomic Write Unit (Normal): 1 00:11:09.078 Atomic Write Unit (PFail): 1 00:11:09.078 Atomic Compare & Write Unit: 1 00:11:09.078 Fused Compare & Write: Not Supported 00:11:09.078 Scatter-Gather List 00:11:09.078 SGL Command Set: Supported 00:11:09.078 SGL Keyed: Not Supported 00:11:09.078 SGL Bit Bucket Descriptor: Not Supported 00:11:09.078 SGL Metadata Pointer: Not Supported 00:11:09.078 Oversized SGL: Not Supported 00:11:09.078 SGL Metadata Address: Not Supported 00:11:09.078 SGL Offset: Not Supported 00:11:09.078 Transport SGL Data Block: Not Supported 00:11:09.078 Replay Protected Memory Block: Not Supported 00:11:09.078 00:11:09.078 Firmware Slot Information 00:11:09.078 ========================= 00:11:09.078 Active slot: 1 00:11:09.078 Slot 1 Firmware Revision: 1.0 00:11:09.078 00:11:09.078 00:11:09.078 Commands Supported and Effects 00:11:09.078 ============================== 00:11:09.078 Admin Commands 00:11:09.078 -------------- 00:11:09.078 Delete I/O Submission Queue (00h): Supported 00:11:09.078 Create I/O Submission Queue (01h): Supported 00:11:09.078 Get Log Page (02h): Supported 00:11:09.078 Delete I/O Completion Queue (04h): Supported 00:11:09.078 Create I/O Completion Queue (05h): Supported 00:11:09.078 Identify (06h): Supported 00:11:09.078 Abort (08h): Supported 00:11:09.078 Set Features (09h): Supported 00:11:09.078 Get Features (0Ah): Supported 00:11:09.078 Asynchronous Event Request (0Ch): Supported 00:11:09.078 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:09.078 Directive Send (19h): Supported 00:11:09.078 Directive Receive (1Ah): Supported 00:11:09.078 Virtualization Management (1Ch): Supported 00:11:09.078 Doorbell Buffer Config (7Ch): Supported 00:11:09.078 Format NVM (80h): Supported LBA-Change 00:11:09.078 I/O Commands 00:11:09.078 ------------ 00:11:09.078 Flush (00h): Supported LBA-Change 00:11:09.078 Write (01h): Supported LBA-Change 00:11:09.078 Read (02h): Supported 00:11:09.078 Compare (05h): Supported 00:11:09.078 Write Zeroes (08h): Supported LBA-Change 00:11:09.078 Dataset Management (09h): Supported LBA-Change 00:11:09.078 Unknown (0Ch): Supported 00:11:09.078 Unknown (12h): Supported 00:11:09.078 Copy (19h): Supported LBA-Change 00:11:09.078 Unknown (1Dh): Supported LBA-Change 00:11:09.078 00:11:09.078 Error Log 00:11:09.078 ========= 00:11:09.078 00:11:09.078 Arbitration 00:11:09.078 =========== 00:11:09.078 Arbitration Burst: no limit 00:11:09.078 00:11:09.078 Power Management 00:11:09.078 ================ 00:11:09.078 Number of Power States: 1 00:11:09.078 Current Power State: Power State #0 00:11:09.078 Power State #0: 00:11:09.078 Max Power: 25.00 W 00:11:09.078 Non-Operational State: Operational 00:11:09.078 Entry Latency: 16 microseconds 00:11:09.078 Exit Latency: 4 microseconds 00:11:09.078 Relative Read Throughput: 0 00:11:09.078 Relative Read Latency: 0 00:11:09.078 Relative Write Throughput: 0 00:11:09.078 Relative Write Latency: 0 00:11:09.078 Idle Power: Not Reported 00:11:09.078 Active Power: Not Reported 00:11:09.078 Non-Operational Permissive Mode: Not Supported 00:11:09.078 00:11:09.078 Health Information 00:11:09.078 ================== 00:11:09.078 Critical Warnings: 00:11:09.078 
Available Spare Space: OK 00:11:09.078 Temperature: OK 00:11:09.078 Device Reliability: OK 00:11:09.078 Read Only: No 00:11:09.078 Volatile Memory Backup: OK 00:11:09.078 Current Temperature: 323 Kelvin (50 Celsius) 00:11:09.078 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:09.078 Available Spare: 0% 00:11:09.078 Available Spare Threshold: 0% 00:11:09.078 Life Percentage Used: 0% 00:11:09.078 Data Units Read: 865 00:11:09.078 Data Units Written: 794 00:11:09.078 Host Read Commands: 37497 00:11:09.078 Host Write Commands: 36920 00:11:09.078 Controller Busy Time: 0 minutes 00:11:09.078 Power Cycles: 0 00:11:09.078 Power On Hours: 0 hours 00:11:09.078 Unsafe Shutdowns: 0 00:11:09.078 Unrecoverable Media Errors: 0 00:11:09.078 Lifetime Error Log Entries: 0 00:11:09.078 Warning Temperature Time: 0 minutes 00:11:09.078 Critical Temperature Time: 0 minutes 00:11:09.078 00:11:09.078 Number of Queues 00:11:09.078 ================ 00:11:09.078 Number of I/O Submission Queues: 64 00:11:09.078 Number of I/O Completion Queues: 64 00:11:09.078 00:11:09.078 ZNS Specific Controller Data 00:11:09.078 ============================ 00:11:09.078 Zone Append Size Limit: 0 00:11:09.078 00:11:09.078 00:11:09.078 Active Namespaces 00:11:09.078 ================= 00:11:09.078 Namespace ID:1 00:11:09.078 Error Recovery Timeout: Unlimited 00:11:09.078 Command Set Identifier: NVM (00h) 00:11:09.078 Deallocate: Supported 00:11:09.078 Deallocated/Unwritten Error: Supported 00:11:09.078 Deallocated Read Value: All 0x00 00:11:09.078 Deallocate in Write Zeroes: Not Supported 00:11:09.078 Deallocated Guard Field: 0xFFFF 00:11:09.078 Flush: Supported 00:11:09.078 Reservation: Not Supported 00:11:09.078 Namespace Sharing Capabilities: Multiple Controllers 00:11:09.078 Size (in LBAs): 262144 (1GiB) 00:11:09.078 Capacity (in LBAs): 262144 (1GiB) 00:11:09.078 Utilization (in LBAs): 262144 (1GiB) 00:11:09.078 Thin Provisioning: Not Supported 00:11:09.078 Per-NS Atomic Units: No 00:11:09.078 Maximum Single Source Range Length: 128 00:11:09.078 Maximum Copy Length: 128 00:11:09.078 Maximum Source Range Count: 128 00:11:09.078 NGUID/EUI64 Never Reused: No 00:11:09.078 Namespace Write Protected: No 00:11:09.078 Endurance group ID: 1 00:11:09.078 Number of LBA Formats: 8 00:11:09.078 Current LBA Format: LBA Format #04 00:11:09.078 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:09.078 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:09.078 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:09.078 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:09.078 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:09.078 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:09.078 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:09.078 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:09.078 00:11:09.078 Get Feature FDP: 00:11:09.078 ================ 00:11:09.078 Enabled: Yes 00:11:09.078 FDP configuration index: 0 00:11:09.078 00:11:09.078 FDP configurations log page 00:11:09.078 =========================== 00:11:09.078 Number of FDP configurations: 1 00:11:09.078 Version: 0 00:11:09.078 Size: 112 00:11:09.078 FDP Configuration Descriptor: 0 00:11:09.078 Descriptor Size: 96 00:11:09.078 Reclaim Group Identifier format: 2 00:11:09.078 FDP Volatile Write Cache: Not Present 00:11:09.078 FDP Configuration: Valid 00:11:09.078 Vendor Specific Size: 0 00:11:09.078 Number of Reclaim Groups: 2 00:11:09.078 Number of Reclaim Unit Handles: 8 00:11:09.078 Max Placement Identifiers: 128 00:11:09.078
Number of Namespaces Supported: 256 00:11:09.078 Reclaim unit Nominal Size: 6000000 bytes 00:11:09.078 Estimated Reclaim Unit Time Limit: Not Reported 00:11:09.078 RUH Desc #000: RUH Type: Initially Isolated 00:11:09.078 RUH Desc #001: RUH Type: Initially Isolated 00:11:09.078 RUH Desc #002: RUH Type: Initially Isolated 00:11:09.078 RUH Desc #003: RUH Type: Initially Isolated 00:11:09.078 RUH Desc #004: RUH Type: Initially Isolated 00:11:09.078 RUH Desc #005: RUH Type: Initially Isolated 00:11:09.078 RUH Desc #006: RUH Type: Initially Isolated 00:11:09.078 RUH Desc #007: RUH Type: Initially Isolated 00:11:09.078 00:11:09.078 FDP reclaim unit handle usage log page 00:11:09.078 ====================================== 00:11:09.078 Number of Reclaim Unit Handles: 8 00:11:09.078 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:09.078 RUH Usage Desc #001: RUH Attributes: Unused 00:11:09.078 RUH Usage Desc #002: RUH Attributes: Unused 00:11:09.078 RUH Usage Desc #003: RUH Attributes: Unused 00:11:09.078 RUH Usage Desc #004: RUH Attributes: Unused 00:11:09.078 RUH Usage Desc #005: RUH Attributes: Unused 00:11:09.078 RUH Usage Desc #006: RUH Attributes: Unused 00:11:09.078 RUH Usage Desc #007: RUH Attributes: Unused 00:11:09.078 00:11:09.078 FDP statistics log page 00:11:09.078 ======================= 00:11:09.078 Host bytes with metadata written: 512860160 00:11:09.078 [2024-12-10 11:18:35.949320] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64210 terminated unexpected Media bytes with metadata written: 512917504 00:11:09.078 Media bytes erased: 0 00:11:09.078 00:11:09.078 FDP events log page 00:11:09.078 =================== 00:11:09.078 Number of FDP events: 0 00:11:09.078 00:11:09.078 NVM Specific Namespace Data 00:11:09.078 =========================== 00:11:09.078 Logical Block Storage Tag Mask: 0 00:11:09.078 Protection Information Capabilities: 00:11:09.078 16b Guard Protection Information Storage Tag Support: No 00:11:09.079 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:09.079 Storage Tag Check Read Support: No 00:11:09.079 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.079 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.079 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.079 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.079 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.079 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.079 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.079 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.079 ===================================================== 00:11:09.079 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:09.079 ===================================================== 00:11:09.079 Controller Capabilities/Features 00:11:09.079 ================================ 00:11:09.079 Vendor ID: 1b36 00:11:09.079 Subsystem Vendor ID: 1af4 00:11:09.079 Serial Number: 12342 00:11:09.079 Model Number: QEMU NVMe Ctrl 00:11:09.079 Firmware Version: 8.0.0 00:11:09.079 Recommended Arb Burst: 6 00:11:09.079 IEEE OUI Identifier: 00 54 52 00:11:09.079 Multi-path I/O
00:11:09.079 May have multiple subsystem ports: No 00:11:09.079 May have multiple controllers: No 00:11:09.079 Associated with SR-IOV VF: No 00:11:09.079 Max Data Transfer Size: 524288 00:11:09.079 Max Number of Namespaces: 256 00:11:09.079 Max Number of I/O Queues: 64 00:11:09.079 NVMe Specification Version (VS): 1.4 00:11:09.079 NVMe Specification Version (Identify): 1.4 00:11:09.079 Maximum Queue Entries: 2048 00:11:09.079 Contiguous Queues Required: Yes 00:11:09.079 Arbitration Mechanisms Supported 00:11:09.079 Weighted Round Robin: Not Supported 00:11:09.079 Vendor Specific: Not Supported 00:11:09.079 Reset Timeout: 7500 ms 00:11:09.079 Doorbell Stride: 4 bytes 00:11:09.079 NVM Subsystem Reset: Not Supported 00:11:09.079 Command Sets Supported 00:11:09.079 NVM Command Set: Supported 00:11:09.079 Boot Partition: Not Supported 00:11:09.079 Memory Page Size Minimum: 4096 bytes 00:11:09.079 Memory Page Size Maximum: 65536 bytes 00:11:09.079 Persistent Memory Region: Not Supported 00:11:09.079 Optional Asynchronous Events Supported 00:11:09.079 Namespace Attribute Notices: Supported 00:11:09.079 Firmware Activation Notices: Not Supported 00:11:09.079 ANA Change Notices: Not Supported 00:11:09.079 PLE Aggregate Log Change Notices: Not Supported 00:11:09.079 LBA Status Info Alert Notices: Not Supported 00:11:09.079 EGE Aggregate Log Change Notices: Not Supported 00:11:09.079 Normal NVM Subsystem Shutdown event: Not Supported 00:11:09.079 Zone Descriptor Change Notices: Not Supported 00:11:09.079 Discovery Log Change Notices: Not Supported 00:11:09.079 Controller Attributes 00:11:09.079 128-bit Host Identifier: Not Supported 00:11:09.079 Non-Operational Permissive Mode: Not Supported 00:11:09.079 NVM Sets: Not Supported 00:11:09.079 Read Recovery Levels: Not Supported 00:11:09.079 Endurance Groups: Not Supported 00:11:09.079 Predictable Latency Mode: Not Supported 00:11:09.079 Traffic Based Keep Alive: Not Supported 00:11:09.079 Namespace Granularity: Not Supported 00:11:09.079 SQ Associations: Not Supported 00:11:09.079 UUID List: Not Supported 00:11:09.079 Multi-Domain Subsystem: Not Supported 00:11:09.079 Fixed Capacity Management: Not Supported 00:11:09.079 Variable Capacity Management: Not Supported 00:11:09.079 Delete Endurance Group: Not Supported 00:11:09.079 Delete NVM Set: Not Supported 00:11:09.079 Extended LBA Formats Supported: Supported 00:11:09.079 Flexible Data Placement Supported: Not Supported 00:11:09.079 00:11:09.079 Controller Memory Buffer Support 00:11:09.079 ================================ 00:11:09.079 Supported: No 00:11:09.079 00:11:09.079 Persistent Memory Region Support 00:11:09.079 ================================ 00:11:09.079 Supported: No 00:11:09.079 00:11:09.079 Admin Command Set Attributes 00:11:09.079 ============================ 00:11:09.079 Security Send/Receive: Not Supported 00:11:09.079 Format NVM: Supported 00:11:09.079 Firmware Activate/Download: Not Supported 00:11:09.079 Namespace Management: Supported 00:11:09.079 Device Self-Test: Not Supported 00:11:09.079 Directives: Supported 00:11:09.079 NVMe-MI: Not Supported 00:11:09.079 Virtualization Management: Not Supported 00:11:09.079 Doorbell Buffer Config: Supported 00:11:09.079 Get LBA Status Capability: Not Supported 00:11:09.079 Command & Feature Lockdown Capability: Not Supported 00:11:09.079 Abort Command Limit: 4 00:11:09.079 Async Event Request Limit: 4 00:11:09.079 Number of Firmware Slots: N/A 00:11:09.079 Firmware Slot 1 Read-Only: N/A 00:11:09.079 Firmware Activation Without Reset: N/A 
00:11:09.079 Multiple Update Detection Support: N/A 00:11:09.079 Firmware Update Granularity: No Information Provided 00:11:09.079 Per-Namespace SMART Log: Yes 00:11:09.079 Asymmetric Namespace Access Log Page: Not Supported 00:11:09.079 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:09.079 Command Effects Log Page: Supported 00:11:09.079 Get Log Page Extended Data: Supported 00:11:09.079 Telemetry Log Pages: Not Supported 00:11:09.079 Persistent Event Log Pages: Not Supported 00:11:09.079 Supported Log Pages Log Page: May Support 00:11:09.079 Commands Supported & Effects Log Page: Not Supported 00:11:09.079 Feature Identifiers & Effects Log Page: May Support 00:11:09.079 NVMe-MI Commands & Effects Log Page: May Support 00:11:09.079 Data Area 4 for Telemetry Log: Not Supported 00:11:09.079 Error Log Page Entries Supported: 1 00:11:09.079 Keep Alive: Not Supported 00:11:09.079 00:11:09.079 NVM Command Set Attributes 00:11:09.079 ========================== 00:11:09.079 Submission Queue Entry Size 00:11:09.079 Max: 64 00:11:09.079 Min: 64 00:11:09.079 Completion Queue Entry Size 00:11:09.079 Max: 16 00:11:09.079 Min: 16 00:11:09.079 Number of Namespaces: 256 00:11:09.079 Compare Command: Supported 00:11:09.079 Write Uncorrectable Command: Not Supported 00:11:09.079 Dataset Management Command: Supported 00:11:09.079 Write Zeroes Command: Supported 00:11:09.079 Set Features Save Field: Supported 00:11:09.079 Reservations: Not Supported 00:11:09.079 Timestamp: Supported 00:11:09.079 Copy: Supported 00:11:09.079 Volatile Write Cache: Present 00:11:09.079 Atomic Write Unit (Normal): 1 00:11:09.079 Atomic Write Unit (PFail): 1 00:11:09.079 Atomic Compare & Write Unit: 1 00:11:09.079 Fused Compare & Write: Not Supported 00:11:09.079 Scatter-Gather List 00:11:09.079 SGL Command Set: Supported 00:11:09.079 SGL Keyed: Not Supported 00:11:09.079 SGL Bit Bucket Descriptor: Not Supported 00:11:09.079 SGL Metadata Pointer: Not Supported 00:11:09.079 Oversized SGL: Not Supported 00:11:09.079 SGL Metadata Address: Not Supported 00:11:09.079 SGL Offset: Not Supported 00:11:09.079 Transport SGL Data Block: Not Supported 00:11:09.079 Replay Protected Memory Block: Not Supported 00:11:09.079 00:11:09.079 Firmware Slot Information 00:11:09.079 ========================= 00:11:09.079 Active slot: 1 00:11:09.079 Slot 1 Firmware Revision: 1.0 00:11:09.079 00:11:09.079 00:11:09.079 Commands Supported and Effects 00:11:09.079 ============================== 00:11:09.079 Admin Commands 00:11:09.079 -------------- 00:11:09.079 Delete I/O Submission Queue (00h): Supported 00:11:09.079 Create I/O Submission Queue (01h): Supported 00:11:09.079 Get Log Page (02h): Supported 00:11:09.079 Delete I/O Completion Queue (04h): Supported 00:11:09.079 Create I/O Completion Queue (05h): Supported 00:11:09.079 Identify (06h): Supported 00:11:09.079 Abort (08h): Supported 00:11:09.079 Set Features (09h): Supported 00:11:09.079 Get Features (0Ah): Supported 00:11:09.079 Asynchronous Event Request (0Ch): Supported 00:11:09.080 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:09.080 Directive Send (19h): Supported 00:11:09.080 Directive Receive (1Ah): Supported 00:11:09.080 Virtualization Management (1Ch): Supported 00:11:09.080 Doorbell Buffer Config (7Ch): Supported 00:11:09.080 Format NVM (80h): Supported LBA-Change 00:11:09.080 I/O Commands 00:11:09.080 ------------ 00:11:09.080 Flush (00h): Supported LBA-Change 00:11:09.080 Write (01h): Supported LBA-Change 00:11:09.080 Read (02h): Supported 00:11:09.080 Compare (05h): 
Supported 00:11:09.080 Write Zeroes (08h): Supported LBA-Change 00:11:09.080 Dataset Management (09h): Supported LBA-Change 00:11:09.080 Unknown (0Ch): Supported 00:11:09.080 Unknown (12h): Supported 00:11:09.080 Copy (19h): Supported LBA-Change 00:11:09.080 Unknown (1Dh): Supported LBA-Change 00:11:09.080 00:11:09.080 Error Log 00:11:09.080 ========= 00:11:09.080 00:11:09.080 Arbitration 00:11:09.080 =========== 00:11:09.080 Arbitration Burst: no limit 00:11:09.080 00:11:09.080 Power Management 00:11:09.080 ================ 00:11:09.080 Number of Power States: 1 00:11:09.080 Current Power State: Power State #0 00:11:09.080 Power State #0: 00:11:09.080 Max Power: 25.00 W 00:11:09.080 Non-Operational State: Operational 00:11:09.080 Entry Latency: 16 microseconds 00:11:09.080 Exit Latency: 4 microseconds 00:11:09.080 Relative Read Throughput: 0 00:11:09.080 Relative Read Latency: 0 00:11:09.080 Relative Write Throughput: 0 00:11:09.080 Relative Write Latency: 0 00:11:09.080 Idle Power: Not Reported 00:11:09.080 Active Power: Not Reported 00:11:09.080 Non-Operational Permissive Mode: Not Supported 00:11:09.080 00:11:09.080 Health Information 00:11:09.080 ================== 00:11:09.080 Critical Warnings: 00:11:09.080 Available Spare Space: OK 00:11:09.080 Temperature: OK 00:11:09.080 Device Reliability: OK 00:11:09.080 Read Only: No 00:11:09.080 Volatile Memory Backup: OK 00:11:09.080 Current Temperature: 323 Kelvin (50 Celsius) 00:11:09.080 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:09.080 Available Spare: 0% 00:11:09.080 Available Spare Threshold: 0% 00:11:09.080 Life Percentage Used: 0% 00:11:09.080 Data Units Read: 2387 00:11:09.080 Data Units Written: 2174 00:11:09.080 Host Read Commands: 110747 00:11:09.080 Host Write Commands: 109016 00:11:09.080 Controller Busy Time: 0 minutes 00:11:09.080 Power Cycles: 0 00:11:09.080 Power On Hours: 0 hours 00:11:09.080 Unsafe Shutdowns: 0 00:11:09.080 Unrecoverable Media Errors: 0 00:11:09.080 Lifetime Error Log Entries: 0 00:11:09.080 Warning Temperature Time: 0 minutes 00:11:09.080 Critical Temperature Time: 0 minutes 00:11:09.080 00:11:09.080 Number of Queues 00:11:09.080 ================ 00:11:09.080 Number of I/O Submission Queues: 64 00:11:09.080 Number of I/O Completion Queues: 64 00:11:09.080 00:11:09.080 ZNS Specific Controller Data 00:11:09.080 ============================ 00:11:09.080 Zone Append Size Limit: 0 00:11:09.080 00:11:09.080 00:11:09.080 Active Namespaces 00:11:09.080 ================= 00:11:09.080 Namespace ID:1 00:11:09.080 Error Recovery Timeout: Unlimited 00:11:09.080 Command Set Identifier: NVM (00h) 00:11:09.080 Deallocate: Supported 00:11:09.080 Deallocated/Unwritten Error: Supported 00:11:09.080 Deallocated Read Value: All 0x00 00:11:09.080 Deallocate in Write Zeroes: Not Supported 00:11:09.080 Deallocated Guard Field: 0xFFFF 00:11:09.080 Flush: Supported 00:11:09.080 Reservation: Not Supported 00:11:09.080 Namespace Sharing Capabilities: Private 00:11:09.080 Size (in LBAs): 1048576 (4GiB) 00:11:09.080 Capacity (in LBAs): 1048576 (4GiB) 00:11:09.080 Utilization (in LBAs): 1048576 (4GiB) 00:11:09.080 Thin Provisioning: Not Supported 00:11:09.080 Per-NS Atomic Units: No 00:11:09.080 Maximum Single Source Range Length: 128 00:11:09.080 Maximum Copy Length: 128 00:11:09.080 Maximum Source Range Count: 128 00:11:09.080 NGUID/EUI64 Never Reused: No 00:11:09.080 Namespace Write Protected: No 00:11:09.080 Number of LBA Formats: 8 00:11:09.080 Current LBA Format: LBA Format #04 00:11:09.080 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:11:09.080 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:09.080 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:09.080 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:09.080 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:09.080 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:09.080 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:09.080 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:09.080 00:11:09.080 NVM Specific Namespace Data 00:11:09.080 =========================== 00:11:09.080 Logical Block Storage Tag Mask: 0 00:11:09.080 Protection Information Capabilities: 00:11:09.080 16b Guard Protection Information Storage Tag Support: No 00:11:09.080 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:09.080 Storage Tag Check Read Support: No 00:11:09.080 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.080 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.080 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.080 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.080 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.080 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.080 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.080 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.080 Namespace ID:2 00:11:09.080 Error Recovery Timeout: Unlimited 00:11:09.080 Command Set Identifier: NVM (00h) 00:11:09.080 Deallocate: Supported 00:11:09.080 Deallocated/Unwritten Error: Supported 00:11:09.080 Deallocated Read Value: All 0x00 00:11:09.080 Deallocate in Write Zeroes: Not Supported 00:11:09.080 Deallocated Guard Field: 0xFFFF 00:11:09.080 Flush: Supported 00:11:09.080 Reservation: Not Supported 00:11:09.080 Namespace Sharing Capabilities: Private 00:11:09.080 Size (in LBAs): 1048576 (4GiB) 00:11:09.080 Capacity (in LBAs): 1048576 (4GiB) 00:11:09.080 Utilization (in LBAs): 1048576 (4GiB) 00:11:09.080 Thin Provisioning: Not Supported 00:11:09.080 Per-NS Atomic Units: No 00:11:09.080 Maximum Single Source Range Length: 128 00:11:09.080 Maximum Copy Length: 128 00:11:09.080 Maximum Source Range Count: 128 00:11:09.080 NGUID/EUI64 Never Reused: No 00:11:09.080 Namespace Write Protected: No 00:11:09.080 Number of LBA Formats: 8 00:11:09.080 Current LBA Format: LBA Format #04 00:11:09.080 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:09.080 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:09.080 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:09.080 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:09.080 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:09.080 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:09.080 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:09.080 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:09.080 00:11:09.080 NVM Specific Namespace Data 00:11:09.080 =========================== 00:11:09.080 Logical Block Storage Tag Mask: 0 00:11:09.080 Protection Information Capabilities: 00:11:09.080 16b Guard Protection Information Storage Tag Support: No 00:11:09.080 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:11:09.080 Storage Tag Check Read Support: No 00:11:09.080 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.080 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.080 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.080 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.080 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.080 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.080 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.080 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.080 Namespace ID:3 00:11:09.080 Error Recovery Timeout: Unlimited 00:11:09.080 Command Set Identifier: NVM (00h) 00:11:09.080 Deallocate: Supported 00:11:09.080 Deallocated/Unwritten Error: Supported 00:11:09.080 Deallocated Read Value: All 0x00 00:11:09.080 Deallocate in Write Zeroes: Not Supported 00:11:09.080 Deallocated Guard Field: 0xFFFF 00:11:09.080 Flush: Supported 00:11:09.080 Reservation: Not Supported 00:11:09.080 Namespace Sharing Capabilities: Private 00:11:09.080 Size (in LBAs): 1048576 (4GiB) 00:11:09.080 Capacity (in LBAs): 1048576 (4GiB) 00:11:09.080 Utilization (in LBAs): 1048576 (4GiB) 00:11:09.080 Thin Provisioning: Not Supported 00:11:09.080 Per-NS Atomic Units: No 00:11:09.080 Maximum Single Source Range Length: 128 00:11:09.080 Maximum Copy Length: 128 00:11:09.080 Maximum Source Range Count: 128 00:11:09.080 NGUID/EUI64 Never Reused: No 00:11:09.080 Namespace Write Protected: No 00:11:09.081 Number of LBA Formats: 8 00:11:09.081 Current LBA Format: LBA Format #04 00:11:09.081 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:09.081 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:09.081 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:09.081 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:09.081 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:09.081 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:09.081 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:09.081 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:09.081 00:11:09.081 NVM Specific Namespace Data 00:11:09.081 =========================== 00:11:09.081 Logical Block Storage Tag Mask: 0 00:11:09.081 Protection Information Capabilities: 00:11:09.081 16b Guard Protection Information Storage Tag Support: No 00:11:09.081 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:09.081 Storage Tag Check Read Support: No 00:11:09.081 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.081 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.081 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.081 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.081 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.081 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.081 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.081 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.081 11:18:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:09.081 11:18:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:11:09.340 ===================================================== 00:11:09.340 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:09.340 ===================================================== 00:11:09.340 Controller Capabilities/Features 00:11:09.340 ================================ 00:11:09.340 Vendor ID: 1b36 00:11:09.340 Subsystem Vendor ID: 1af4 00:11:09.340 Serial Number: 12340 00:11:09.340 Model Number: QEMU NVMe Ctrl 00:11:09.340 Firmware Version: 8.0.0 00:11:09.340 Recommended Arb Burst: 6 00:11:09.340 IEEE OUI Identifier: 00 54 52 00:11:09.340 Multi-path I/O 00:11:09.340 May have multiple subsystem ports: No 00:11:09.340 May have multiple controllers: No 00:11:09.340 Associated with SR-IOV VF: No 00:11:09.340 Max Data Transfer Size: 524288 00:11:09.340 Max Number of Namespaces: 256 00:11:09.340 Max Number of I/O Queues: 64 00:11:09.340 NVMe Specification Version (VS): 1.4 00:11:09.340 NVMe Specification Version (Identify): 1.4 00:11:09.340 Maximum Queue Entries: 2048 00:11:09.340 Contiguous Queues Required: Yes 00:11:09.340 Arbitration Mechanisms Supported 00:11:09.340 Weighted Round Robin: Not Supported 00:11:09.340 Vendor Specific: Not Supported 00:11:09.340 Reset Timeout: 7500 ms 00:11:09.340 Doorbell Stride: 4 bytes 00:11:09.340 NVM Subsystem Reset: Not Supported 00:11:09.340 Command Sets Supported 00:11:09.340 NVM Command Set: Supported 00:11:09.340 Boot Partition: Not Supported 00:11:09.340 Memory Page Size Minimum: 4096 bytes 00:11:09.340 Memory Page Size Maximum: 65536 bytes 00:11:09.340 Persistent Memory Region: Not Supported 00:11:09.340 Optional Asynchronous Events Supported 00:11:09.340 Namespace Attribute Notices: Supported 00:11:09.340 Firmware Activation Notices: Not Supported 00:11:09.340 ANA Change Notices: Not Supported 00:11:09.340 PLE Aggregate Log Change Notices: Not Supported 00:11:09.340 LBA Status Info Alert Notices: Not Supported 00:11:09.340 EGE Aggregate Log Change Notices: Not Supported 00:11:09.340 Normal NVM Subsystem Shutdown event: Not Supported 00:11:09.340 Zone Descriptor Change Notices: Not Supported 00:11:09.340 Discovery Log Change Notices: Not Supported 00:11:09.340 Controller Attributes 00:11:09.340 128-bit Host Identifier: Not Supported 00:11:09.340 Non-Operational Permissive Mode: Not Supported 00:11:09.340 NVM Sets: Not Supported 00:11:09.340 Read Recovery Levels: Not Supported 00:11:09.340 Endurance Groups: Not Supported 00:11:09.340 Predictable Latency Mode: Not Supported 00:11:09.340 Traffic Based Keep Alive: Not Supported 00:11:09.340 Namespace Granularity: Not Supported 00:11:09.340 SQ Associations: Not Supported 00:11:09.340 UUID List: Not Supported 00:11:09.340 Multi-Domain Subsystem: Not Supported 00:11:09.340 Fixed Capacity Management: Not Supported 00:11:09.340 Variable Capacity Management: Not Supported 00:11:09.340 Delete Endurance Group: Not Supported 00:11:09.340 Delete NVM Set: Not Supported 00:11:09.340 Extended LBA Formats Supported: Supported 00:11:09.340 Flexible Data Placement Supported: Not Supported 00:11:09.340 00:11:09.340 Controller Memory Buffer Support 00:11:09.340 ================================ 00:11:09.340 Supported: No 00:11:09.340 00:11:09.340 Persistent Memory Region Support 00:11:09.340 
================================ 00:11:09.340 Supported: No 00:11:09.340 00:11:09.340 Admin Command Set Attributes 00:11:09.340 ============================ 00:11:09.340 Security Send/Receive: Not Supported 00:11:09.340 Format NVM: Supported 00:11:09.340 Firmware Activate/Download: Not Supported 00:11:09.340 Namespace Management: Supported 00:11:09.340 Device Self-Test: Not Supported 00:11:09.340 Directives: Supported 00:11:09.340 NVMe-MI: Not Supported 00:11:09.340 Virtualization Management: Not Supported 00:11:09.340 Doorbell Buffer Config: Supported 00:11:09.340 Get LBA Status Capability: Not Supported 00:11:09.340 Command & Feature Lockdown Capability: Not Supported 00:11:09.340 Abort Command Limit: 4 00:11:09.340 Async Event Request Limit: 4 00:11:09.340 Number of Firmware Slots: N/A 00:11:09.340 Firmware Slot 1 Read-Only: N/A 00:11:09.340 Firmware Activation Without Reset: N/A 00:11:09.340 Multiple Update Detection Support: N/A 00:11:09.340 Firmware Update Granularity: No Information Provided 00:11:09.340 Per-Namespace SMART Log: Yes 00:11:09.340 Asymmetric Namespace Access Log Page: Not Supported 00:11:09.340 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:09.340 Command Effects Log Page: Supported 00:11:09.340 Get Log Page Extended Data: Supported 00:11:09.340 Telemetry Log Pages: Not Supported 00:11:09.340 Persistent Event Log Pages: Not Supported 00:11:09.340 Supported Log Pages Log Page: May Support 00:11:09.340 Commands Supported & Effects Log Page: Not Supported 00:11:09.340 Feature Identifiers & Effects Log Page: May Support 00:11:09.340 NVMe-MI Commands & Effects Log Page: May Support 00:11:09.340 Data Area 4 for Telemetry Log: Not Supported 00:11:09.340 Error Log Page Entries Supported: 1 00:11:09.340 Keep Alive: Not Supported 00:11:09.340 00:11:09.340 NVM Command Set Attributes 00:11:09.340 ========================== 00:11:09.340 Submission Queue Entry Size 00:11:09.340 Max: 64 00:11:09.340 Min: 64 00:11:09.340 Completion Queue Entry Size 00:11:09.340 Max: 16 00:11:09.340 Min: 16 00:11:09.340 Number of Namespaces: 256 00:11:09.340 Compare Command: Supported 00:11:09.340 Write Uncorrectable Command: Not Supported 00:11:09.340 Dataset Management Command: Supported 00:11:09.340 Write Zeroes Command: Supported 00:11:09.340 Set Features Save Field: Supported 00:11:09.340 Reservations: Not Supported 00:11:09.340 Timestamp: Supported 00:11:09.340 Copy: Supported 00:11:09.340 Volatile Write Cache: Present 00:11:09.340 Atomic Write Unit (Normal): 1 00:11:09.340 Atomic Write Unit (PFail): 1 00:11:09.340 Atomic Compare & Write Unit: 1 00:11:09.340 Fused Compare & Write: Not Supported 00:11:09.340 Scatter-Gather List 00:11:09.340 SGL Command Set: Supported 00:11:09.340 SGL Keyed: Not Supported 00:11:09.340 SGL Bit Bucket Descriptor: Not Supported 00:11:09.340 SGL Metadata Pointer: Not Supported 00:11:09.340 Oversized SGL: Not Supported 00:11:09.340 SGL Metadata Address: Not Supported 00:11:09.340 SGL Offset: Not Supported 00:11:09.340 Transport SGL Data Block: Not Supported 00:11:09.340 Replay Protected Memory Block: Not Supported 00:11:09.340 00:11:09.340 Firmware Slot Information 00:11:09.340 ========================= 00:11:09.340 Active slot: 1 00:11:09.340 Slot 1 Firmware Revision: 1.0 00:11:09.340 00:11:09.340 00:11:09.340 Commands Supported and Effects 00:11:09.340 ============================== 00:11:09.340 Admin Commands 00:11:09.340 -------------- 00:11:09.340 Delete I/O Submission Queue (00h): Supported 00:11:09.340 Create I/O Submission Queue (01h): Supported 00:11:09.340 
Get Log Page (02h): Supported 00:11:09.340 Delete I/O Completion Queue (04h): Supported 00:11:09.340 Create I/O Completion Queue (05h): Supported 00:11:09.340 Identify (06h): Supported 00:11:09.340 Abort (08h): Supported 00:11:09.340 Set Features (09h): Supported 00:11:09.340 Get Features (0Ah): Supported 00:11:09.340 Asynchronous Event Request (0Ch): Supported 00:11:09.340 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:09.340 Directive Send (19h): Supported 00:11:09.340 Directive Receive (1Ah): Supported 00:11:09.340 Virtualization Management (1Ch): Supported 00:11:09.340 Doorbell Buffer Config (7Ch): Supported 00:11:09.340 Format NVM (80h): Supported LBA-Change 00:11:09.340 I/O Commands 00:11:09.340 ------------ 00:11:09.340 Flush (00h): Supported LBA-Change 00:11:09.340 Write (01h): Supported LBA-Change 00:11:09.340 Read (02h): Supported 00:11:09.340 Compare (05h): Supported 00:11:09.340 Write Zeroes (08h): Supported LBA-Change 00:11:09.340 Dataset Management (09h): Supported LBA-Change 00:11:09.340 Unknown (0Ch): Supported 00:11:09.340 Unknown (12h): Supported 00:11:09.340 Copy (19h): Supported LBA-Change 00:11:09.340 Unknown (1Dh): Supported LBA-Change 00:11:09.340 00:11:09.340 Error Log 00:11:09.340 ========= 00:11:09.340 00:11:09.340 Arbitration 00:11:09.340 =========== 00:11:09.340 Arbitration Burst: no limit 00:11:09.340 00:11:09.340 Power Management 00:11:09.340 ================ 00:11:09.340 Number of Power States: 1 00:11:09.341 Current Power State: Power State #0 00:11:09.341 Power State #0: 00:11:09.341 Max Power: 25.00 W 00:11:09.341 Non-Operational State: Operational 00:11:09.341 Entry Latency: 16 microseconds 00:11:09.341 Exit Latency: 4 microseconds 00:11:09.341 Relative Read Throughput: 0 00:11:09.341 Relative Read Latency: 0 00:11:09.341 Relative Write Throughput: 0 00:11:09.341 Relative Write Latency: 0 00:11:09.341 Idle Power: Not Reported 00:11:09.341 Active Power: Not Reported 00:11:09.341 Non-Operational Permissive Mode: Not Supported 00:11:09.341 00:11:09.341 Health Information 00:11:09.341 ================== 00:11:09.341 Critical Warnings: 00:11:09.341 Available Spare Space: OK 00:11:09.341 Temperature: OK 00:11:09.341 Device Reliability: OK 00:11:09.341 Read Only: No 00:11:09.341 Volatile Memory Backup: OK 00:11:09.341 Current Temperature: 323 Kelvin (50 Celsius) 00:11:09.341 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:09.341 Available Spare: 0% 00:11:09.341 Available Spare Threshold: 0% 00:11:09.341 Life Percentage Used: 0% 00:11:09.341 Data Units Read: 766 00:11:09.341 Data Units Written: 695 00:11:09.341 Host Read Commands: 36367 00:11:09.341 Host Write Commands: 36153 00:11:09.341 Controller Busy Time: 0 minutes 00:11:09.341 Power Cycles: 0 00:11:09.341 Power On Hours: 0 hours 00:11:09.341 Unsafe Shutdowns: 0 00:11:09.341 Unrecoverable Media Errors: 0 00:11:09.341 Lifetime Error Log Entries: 0 00:11:09.341 Warning Temperature Time: 0 minutes 00:11:09.341 Critical Temperature Time: 0 minutes 00:11:09.341 00:11:09.341 Number of Queues 00:11:09.341 ================ 00:11:09.341 Number of I/O Submission Queues: 64 00:11:09.341 Number of I/O Completion Queues: 64 00:11:09.341 00:11:09.341 ZNS Specific Controller Data 00:11:09.341 ============================ 00:11:09.341 Zone Append Size Limit: 0 00:11:09.341 00:11:09.341 00:11:09.341 Active Namespaces 00:11:09.341 ================= 00:11:09.341 Namespace ID:1 00:11:09.341 Error Recovery Timeout: Unlimited 00:11:09.341 Command Set Identifier: NVM (00h) 00:11:09.341 Deallocate: Supported 
00:11:09.341 Deallocated/Unwritten Error: Supported 00:11:09.341 Deallocated Read Value: All 0x00 00:11:09.341 Deallocate in Write Zeroes: Not Supported 00:11:09.341 Deallocated Guard Field: 0xFFFF 00:11:09.341 Flush: Supported 00:11:09.341 Reservation: Not Supported 00:11:09.341 Metadata Transferred as: Separate Metadata Buffer 00:11:09.341 Namespace Sharing Capabilities: Private 00:11:09.341 Size (in LBAs): 1548666 (5GiB) 00:11:09.341 Capacity (in LBAs): 1548666 (5GiB) 00:11:09.341 Utilization (in LBAs): 1548666 (5GiB) 00:11:09.341 Thin Provisioning: Not Supported 00:11:09.341 Per-NS Atomic Units: No 00:11:09.341 Maximum Single Source Range Length: 128 00:11:09.341 Maximum Copy Length: 128 00:11:09.341 Maximum Source Range Count: 128 00:11:09.341 NGUID/EUI64 Never Reused: No 00:11:09.341 Namespace Write Protected: No 00:11:09.341 Number of LBA Formats: 8 00:11:09.341 Current LBA Format: LBA Format #07 00:11:09.341 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:09.341 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:09.341 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:09.341 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:09.341 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:09.341 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:09.341 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:09.341 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:09.341 00:11:09.341 NVM Specific Namespace Data 00:11:09.341 =========================== 00:11:09.341 Logical Block Storage Tag Mask: 0 00:11:09.341 Protection Information Capabilities: 00:11:09.341 16b Guard Protection Information Storage Tag Support: No 00:11:09.341 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:09.341 Storage Tag Check Read Support: No 00:11:09.341 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.341 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.341 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.341 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.341 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.341 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.341 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.341 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.341 11:18:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:09.341 11:18:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:11:09.600 ===================================================== 00:11:09.600 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:09.600 ===================================================== 00:11:09.600 Controller Capabilities/Features 00:11:09.600 ================================ 00:11:09.600 Vendor ID: 1b36 00:11:09.600 Subsystem Vendor ID: 1af4 00:11:09.600 Serial Number: 12341 00:11:09.600 Model Number: QEMU NVMe Ctrl 00:11:09.600 Firmware Version: 8.0.0 00:11:09.600 Recommended Arb Burst: 6 00:11:09.600 IEEE OUI Identifier: 00 54 52 00:11:09.600 Multi-path I/O 00:11:09.600 May have multiple subsystem ports: No 00:11:09.600 May have multiple 
controllers: No 00:11:09.600 Associated with SR-IOV VF: No 00:11:09.600 Max Data Transfer Size: 524288 00:11:09.600 Max Number of Namespaces: 256 00:11:09.600 Max Number of I/O Queues: 64 00:11:09.600 NVMe Specification Version (VS): 1.4 00:11:09.600 NVMe Specification Version (Identify): 1.4 00:11:09.600 Maximum Queue Entries: 2048 00:11:09.600 Contiguous Queues Required: Yes 00:11:09.600 Arbitration Mechanisms Supported 00:11:09.600 Weighted Round Robin: Not Supported 00:11:09.600 Vendor Specific: Not Supported 00:11:09.600 Reset Timeout: 7500 ms 00:11:09.600 Doorbell Stride: 4 bytes 00:11:09.600 NVM Subsystem Reset: Not Supported 00:11:09.600 Command Sets Supported 00:11:09.600 NVM Command Set: Supported 00:11:09.600 Boot Partition: Not Supported 00:11:09.600 Memory Page Size Minimum: 4096 bytes 00:11:09.600 Memory Page Size Maximum: 65536 bytes 00:11:09.600 Persistent Memory Region: Not Supported 00:11:09.600 Optional Asynchronous Events Supported 00:11:09.600 Namespace Attribute Notices: Supported 00:11:09.600 Firmware Activation Notices: Not Supported 00:11:09.600 ANA Change Notices: Not Supported 00:11:09.600 PLE Aggregate Log Change Notices: Not Supported 00:11:09.600 LBA Status Info Alert Notices: Not Supported 00:11:09.600 EGE Aggregate Log Change Notices: Not Supported 00:11:09.600 Normal NVM Subsystem Shutdown event: Not Supported 00:11:09.600 Zone Descriptor Change Notices: Not Supported 00:11:09.600 Discovery Log Change Notices: Not Supported 00:11:09.600 Controller Attributes 00:11:09.600 128-bit Host Identifier: Not Supported 00:11:09.600 Non-Operational Permissive Mode: Not Supported 00:11:09.600 NVM Sets: Not Supported 00:11:09.600 Read Recovery Levels: Not Supported 00:11:09.600 Endurance Groups: Not Supported 00:11:09.600 Predictable Latency Mode: Not Supported 00:11:09.600 Traffic Based Keep Alive: Not Supported 00:11:09.600 Namespace Granularity: Not Supported 00:11:09.600 SQ Associations: Not Supported 00:11:09.600 UUID List: Not Supported 00:11:09.600 Multi-Domain Subsystem: Not Supported 00:11:09.600 Fixed Capacity Management: Not Supported 00:11:09.600 Variable Capacity Management: Not Supported 00:11:09.600 Delete Endurance Group: Not Supported 00:11:09.600 Delete NVM Set: Not Supported 00:11:09.600 Extended LBA Formats Supported: Supported 00:11:09.600 Flexible Data Placement Supported: Not Supported 00:11:09.600 00:11:09.600 Controller Memory Buffer Support 00:11:09.600 ================================ 00:11:09.600 Supported: No 00:11:09.600 00:11:09.600 Persistent Memory Region Support 00:11:09.600 ================================ 00:11:09.600 Supported: No 00:11:09.600 00:11:09.600 Admin Command Set Attributes 00:11:09.600 ============================ 00:11:09.600 Security Send/Receive: Not Supported 00:11:09.600 Format NVM: Supported 00:11:09.600 Firmware Activate/Download: Not Supported 00:11:09.601 Namespace Management: Supported 00:11:09.601 Device Self-Test: Not Supported 00:11:09.601 Directives: Supported 00:11:09.601 NVMe-MI: Not Supported 00:11:09.601 Virtualization Management: Not Supported 00:11:09.601 Doorbell Buffer Config: Supported 00:11:09.601 Get LBA Status Capability: Not Supported 00:11:09.601 Command & Feature Lockdown Capability: Not Supported 00:11:09.601 Abort Command Limit: 4 00:11:09.601 Async Event Request Limit: 4 00:11:09.601 Number of Firmware Slots: N/A 00:11:09.601 Firmware Slot 1 Read-Only: N/A 00:11:09.601 Firmware Activation Without Reset: N/A 00:11:09.601 Multiple Update Detection Support: N/A 00:11:09.601 Firmware Update 
Granularity: No Information Provided 00:11:09.601 Per-Namespace SMART Log: Yes 00:11:09.601 Asymmetric Namespace Access Log Page: Not Supported 00:11:09.601 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:09.601 Command Effects Log Page: Supported 00:11:09.601 Get Log Page Extended Data: Supported 00:11:09.601 Telemetry Log Pages: Not Supported 00:11:09.601 Persistent Event Log Pages: Not Supported 00:11:09.601 Supported Log Pages Log Page: May Support 00:11:09.601 Commands Supported & Effects Log Page: Not Supported 00:11:09.601 Feature Identifiers & Effects Log Page: May Support 00:11:09.601 NVMe-MI Commands & Effects Log Page: May Support 00:11:09.601 Data Area 4 for Telemetry Log: Not Supported 00:11:09.601 Error Log Page Entries Supported: 1 00:11:09.601 Keep Alive: Not Supported 00:11:09.601 00:11:09.601 NVM Command Set Attributes 00:11:09.601 ========================== 00:11:09.601 Submission Queue Entry Size 00:11:09.601 Max: 64 00:11:09.601 Min: 64 00:11:09.601 Completion Queue Entry Size 00:11:09.601 Max: 16 00:11:09.601 Min: 16 00:11:09.601 Number of Namespaces: 256 00:11:09.601 Compare Command: Supported 00:11:09.601 Write Uncorrectable Command: Not Supported 00:11:09.601 Dataset Management Command: Supported 00:11:09.601 Write Zeroes Command: Supported 00:11:09.601 Set Features Save Field: Supported 00:11:09.601 Reservations: Not Supported 00:11:09.601 Timestamp: Supported 00:11:09.601 Copy: Supported 00:11:09.601 Volatile Write Cache: Present 00:11:09.601 Atomic Write Unit (Normal): 1 00:11:09.601 Atomic Write Unit (PFail): 1 00:11:09.601 Atomic Compare & Write Unit: 1 00:11:09.601 Fused Compare & Write: Not Supported 00:11:09.601 Scatter-Gather List 00:11:09.601 SGL Command Set: Supported 00:11:09.601 SGL Keyed: Not Supported 00:11:09.601 SGL Bit Bucket Descriptor: Not Supported 00:11:09.601 SGL Metadata Pointer: Not Supported 00:11:09.601 Oversized SGL: Not Supported 00:11:09.601 SGL Metadata Address: Not Supported 00:11:09.601 SGL Offset: Not Supported 00:11:09.601 Transport SGL Data Block: Not Supported 00:11:09.601 Replay Protected Memory Block: Not Supported 00:11:09.601 00:11:09.601 Firmware Slot Information 00:11:09.601 ========================= 00:11:09.601 Active slot: 1 00:11:09.601 Slot 1 Firmware Revision: 1.0 00:11:09.601 00:11:09.601 00:11:09.601 Commands Supported and Effects 00:11:09.601 ============================== 00:11:09.601 Admin Commands 00:11:09.601 -------------- 00:11:09.601 Delete I/O Submission Queue (00h): Supported 00:11:09.601 Create I/O Submission Queue (01h): Supported 00:11:09.601 Get Log Page (02h): Supported 00:11:09.601 Delete I/O Completion Queue (04h): Supported 00:11:09.601 Create I/O Completion Queue (05h): Supported 00:11:09.601 Identify (06h): Supported 00:11:09.601 Abort (08h): Supported 00:11:09.601 Set Features (09h): Supported 00:11:09.601 Get Features (0Ah): Supported 00:11:09.601 Asynchronous Event Request (0Ch): Supported 00:11:09.601 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:09.601 Directive Send (19h): Supported 00:11:09.601 Directive Receive (1Ah): Supported 00:11:09.601 Virtualization Management (1Ch): Supported 00:11:09.601 Doorbell Buffer Config (7Ch): Supported 00:11:09.601 Format NVM (80h): Supported LBA-Change 00:11:09.601 I/O Commands 00:11:09.601 ------------ 00:11:09.601 Flush (00h): Supported LBA-Change 00:11:09.601 Write (01h): Supported LBA-Change 00:11:09.601 Read (02h): Supported 00:11:09.601 Compare (05h): Supported 00:11:09.601 Write Zeroes (08h): Supported LBA-Change 00:11:09.601 
Dataset Management (09h): Supported LBA-Change 00:11:09.601 Unknown (0Ch): Supported 00:11:09.601 Unknown (12h): Supported 00:11:09.601 Copy (19h): Supported LBA-Change 00:11:09.601 Unknown (1Dh): Supported LBA-Change 00:11:09.601 00:11:09.601 Error Log 00:11:09.601 ========= 00:11:09.601 00:11:09.601 Arbitration 00:11:09.601 =========== 00:11:09.601 Arbitration Burst: no limit 00:11:09.601 00:11:09.601 Power Management 00:11:09.601 ================ 00:11:09.601 Number of Power States: 1 00:11:09.601 Current Power State: Power State #0 00:11:09.601 Power State #0: 00:11:09.601 Max Power: 25.00 W 00:11:09.601 Non-Operational State: Operational 00:11:09.601 Entry Latency: 16 microseconds 00:11:09.601 Exit Latency: 4 microseconds 00:11:09.601 Relative Read Throughput: 0 00:11:09.601 Relative Read Latency: 0 00:11:09.601 Relative Write Throughput: 0 00:11:09.601 Relative Write Latency: 0 00:11:09.601 Idle Power: Not Reported 00:11:09.601 Active Power: Not Reported 00:11:09.601 Non-Operational Permissive Mode: Not Supported 00:11:09.601 00:11:09.601 Health Information 00:11:09.601 ================== 00:11:09.601 Critical Warnings: 00:11:09.601 Available Spare Space: OK 00:11:09.601 Temperature: OK 00:11:09.601 Device Reliability: OK 00:11:09.601 Read Only: No 00:11:09.601 Volatile Memory Backup: OK 00:11:09.601 Current Temperature: 323 Kelvin (50 Celsius) 00:11:09.601 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:09.601 Available Spare: 0% 00:11:09.601 Available Spare Threshold: 0% 00:11:09.601 Life Percentage Used: 0% 00:11:09.601 Data Units Read: 1186 00:11:09.601 Data Units Written: 1046 00:11:09.601 Host Read Commands: 54553 00:11:09.601 Host Write Commands: 53238 00:11:09.601 Controller Busy Time: 0 minutes 00:11:09.601 Power Cycles: 0 00:11:09.601 Power On Hours: 0 hours 00:11:09.601 Unsafe Shutdowns: 0 00:11:09.601 Unrecoverable Media Errors: 0 00:11:09.601 Lifetime Error Log Entries: 0 00:11:09.601 Warning Temperature Time: 0 minutes 00:11:09.601 Critical Temperature Time: 0 minutes 00:11:09.601 00:11:09.601 Number of Queues 00:11:09.601 ================ 00:11:09.601 Number of I/O Submission Queues: 64 00:11:09.601 Number of I/O Completion Queues: 64 00:11:09.601 00:11:09.601 ZNS Specific Controller Data 00:11:09.601 ============================ 00:11:09.601 Zone Append Size Limit: 0 00:11:09.601 00:11:09.601 00:11:09.601 Active Namespaces 00:11:09.601 ================= 00:11:09.601 Namespace ID:1 00:11:09.601 Error Recovery Timeout: Unlimited 00:11:09.601 Command Set Identifier: NVM (00h) 00:11:09.601 Deallocate: Supported 00:11:09.601 Deallocated/Unwritten Error: Supported 00:11:09.601 Deallocated Read Value: All 0x00 00:11:09.601 Deallocate in Write Zeroes: Not Supported 00:11:09.601 Deallocated Guard Field: 0xFFFF 00:11:09.601 Flush: Supported 00:11:09.601 Reservation: Not Supported 00:11:09.601 Namespace Sharing Capabilities: Private 00:11:09.601 Size (in LBAs): 1310720 (5GiB) 00:11:09.601 Capacity (in LBAs): 1310720 (5GiB) 00:11:09.601 Utilization (in LBAs): 1310720 (5GiB) 00:11:09.601 Thin Provisioning: Not Supported 00:11:09.601 Per-NS Atomic Units: No 00:11:09.601 Maximum Single Source Range Length: 128 00:11:09.601 Maximum Copy Length: 128 00:11:09.601 Maximum Source Range Count: 128 00:11:09.601 NGUID/EUI64 Never Reused: No 00:11:09.601 Namespace Write Protected: No 00:11:09.601 Number of LBA Formats: 8 00:11:09.601 Current LBA Format: LBA Format #04 00:11:09.601 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:09.601 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:11:09.601 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:09.601 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:09.601 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:09.601 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:09.601 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:09.601 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:09.601 00:11:09.601 NVM Specific Namespace Data 00:11:09.601 =========================== 00:11:09.601 Logical Block Storage Tag Mask: 0 00:11:09.601 Protection Information Capabilities: 00:11:09.601 16b Guard Protection Information Storage Tag Support: No 00:11:09.601 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:09.601 Storage Tag Check Read Support: No 00:11:09.601 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.601 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.601 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.601 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.601 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.601 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.601 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.601 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.602 11:18:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:09.602 11:18:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:11:09.861 ===================================================== 00:11:09.861 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:09.861 ===================================================== 00:11:09.861 Controller Capabilities/Features 00:11:09.861 ================================ 00:11:09.861 Vendor ID: 1b36 00:11:09.861 Subsystem Vendor ID: 1af4 00:11:09.861 Serial Number: 12342 00:11:09.861 Model Number: QEMU NVMe Ctrl 00:11:09.861 Firmware Version: 8.0.0 00:11:09.861 Recommended Arb Burst: 6 00:11:09.861 IEEE OUI Identifier: 00 54 52 00:11:09.861 Multi-path I/O 00:11:09.861 May have multiple subsystem ports: No 00:11:09.861 May have multiple controllers: No 00:11:09.861 Associated with SR-IOV VF: No 00:11:09.861 Max Data Transfer Size: 524288 00:11:09.861 Max Number of Namespaces: 256 00:11:09.861 Max Number of I/O Queues: 64 00:11:09.861 NVMe Specification Version (VS): 1.4 00:11:09.861 NVMe Specification Version (Identify): 1.4 00:11:09.861 Maximum Queue Entries: 2048 00:11:09.861 Contiguous Queues Required: Yes 00:11:09.861 Arbitration Mechanisms Supported 00:11:09.861 Weighted Round Robin: Not Supported 00:11:09.861 Vendor Specific: Not Supported 00:11:09.861 Reset Timeout: 7500 ms 00:11:09.861 Doorbell Stride: 4 bytes 00:11:09.861 NVM Subsystem Reset: Not Supported 00:11:09.861 Command Sets Supported 00:11:09.861 NVM Command Set: Supported 00:11:09.861 Boot Partition: Not Supported 00:11:09.861 Memory Page Size Minimum: 4096 bytes 00:11:09.861 Memory Page Size Maximum: 65536 bytes 00:11:09.861 Persistent Memory Region: Not Supported 00:11:09.861 Optional Asynchronous Events Supported 00:11:09.861 Namespace Attribute Notices: Supported 00:11:09.861 
Firmware Activation Notices: Not Supported 00:11:09.861 ANA Change Notices: Not Supported 00:11:09.861 PLE Aggregate Log Change Notices: Not Supported 00:11:09.861 LBA Status Info Alert Notices: Not Supported 00:11:09.861 EGE Aggregate Log Change Notices: Not Supported 00:11:09.861 Normal NVM Subsystem Shutdown event: Not Supported 00:11:09.861 Zone Descriptor Change Notices: Not Supported 00:11:09.861 Discovery Log Change Notices: Not Supported 00:11:09.861 Controller Attributes 00:11:09.861 128-bit Host Identifier: Not Supported 00:11:09.861 Non-Operational Permissive Mode: Not Supported 00:11:09.861 NVM Sets: Not Supported 00:11:09.861 Read Recovery Levels: Not Supported 00:11:09.861 Endurance Groups: Not Supported 00:11:09.861 Predictable Latency Mode: Not Supported 00:11:09.861 Traffic Based Keep Alive: Not Supported 00:11:09.862 Namespace Granularity: Not Supported 00:11:09.862 SQ Associations: Not Supported 00:11:09.862 UUID List: Not Supported 00:11:09.862 Multi-Domain Subsystem: Not Supported 00:11:09.862 Fixed Capacity Management: Not Supported 00:11:09.862 Variable Capacity Management: Not Supported 00:11:09.862 Delete Endurance Group: Not Supported 00:11:09.862 Delete NVM Set: Not Supported 00:11:09.862 Extended LBA Formats Supported: Supported 00:11:09.862 Flexible Data Placement Supported: Not Supported 00:11:09.862 00:11:09.862 Controller Memory Buffer Support 00:11:09.862 ================================ 00:11:09.862 Supported: No 00:11:09.862 00:11:09.862 Persistent Memory Region Support 00:11:09.862 ================================ 00:11:09.862 Supported: No 00:11:09.862 00:11:09.862 Admin Command Set Attributes 00:11:09.862 ============================ 00:11:09.862 Security Send/Receive: Not Supported 00:11:09.862 Format NVM: Supported 00:11:09.862 Firmware Activate/Download: Not Supported 00:11:09.862 Namespace Management: Supported 00:11:09.862 Device Self-Test: Not Supported 00:11:09.862 Directives: Supported 00:11:09.862 NVMe-MI: Not Supported 00:11:09.862 Virtualization Management: Not Supported 00:11:09.862 Doorbell Buffer Config: Supported 00:11:09.862 Get LBA Status Capability: Not Supported 00:11:09.862 Command & Feature Lockdown Capability: Not Supported 00:11:09.862 Abort Command Limit: 4 00:11:09.862 Async Event Request Limit: 4 00:11:09.862 Number of Firmware Slots: N/A 00:11:09.862 Firmware Slot 1 Read-Only: N/A 00:11:09.862 Firmware Activation Without Reset: N/A 00:11:09.862 Multiple Update Detection Support: N/A 00:11:09.862 Firmware Update Granularity: No Information Provided 00:11:09.862 Per-Namespace SMART Log: Yes 00:11:09.862 Asymmetric Namespace Access Log Page: Not Supported 00:11:09.862 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:09.862 Command Effects Log Page: Supported 00:11:09.862 Get Log Page Extended Data: Supported 00:11:09.862 Telemetry Log Pages: Not Supported 00:11:09.862 Persistent Event Log Pages: Not Supported 00:11:09.862 Supported Log Pages Log Page: May Support 00:11:09.862 Commands Supported & Effects Log Page: Not Supported 00:11:09.862 Feature Identifiers & Effects Log Page: May Support 00:11:09.862 NVMe-MI Commands & Effects Log Page: May Support 00:11:09.862 Data Area 4 for Telemetry Log: Not Supported 00:11:09.862 Error Log Page Entries Supported: 1 00:11:09.862 Keep Alive: Not Supported 00:11:09.862 00:11:09.862 NVM Command Set Attributes 00:11:09.862 ========================== 00:11:09.862 Submission Queue Entry Size 00:11:09.862 Max: 64 00:11:09.862 Min: 64 00:11:09.862 Completion Queue Entry Size 00:11:09.862 Max: 16 
00:11:09.862 Min: 16 00:11:09.862 Number of Namespaces: 256 00:11:09.862 Compare Command: Supported 00:11:09.862 Write Uncorrectable Command: Not Supported 00:11:09.862 Dataset Management Command: Supported 00:11:09.862 Write Zeroes Command: Supported 00:11:09.862 Set Features Save Field: Supported 00:11:09.862 Reservations: Not Supported 00:11:09.862 Timestamp: Supported 00:11:09.862 Copy: Supported 00:11:09.862 Volatile Write Cache: Present 00:11:09.862 Atomic Write Unit (Normal): 1 00:11:09.862 Atomic Write Unit (PFail): 1 00:11:09.862 Atomic Compare & Write Unit: 1 00:11:09.862 Fused Compare & Write: Not Supported 00:11:09.862 Scatter-Gather List 00:11:09.862 SGL Command Set: Supported 00:11:09.862 SGL Keyed: Not Supported 00:11:09.862 SGL Bit Bucket Descriptor: Not Supported 00:11:09.862 SGL Metadata Pointer: Not Supported 00:11:09.862 Oversized SGL: Not Supported 00:11:09.862 SGL Metadata Address: Not Supported 00:11:09.862 SGL Offset: Not Supported 00:11:09.862 Transport SGL Data Block: Not Supported 00:11:09.862 Replay Protected Memory Block: Not Supported 00:11:09.862 00:11:09.862 Firmware Slot Information 00:11:09.862 ========================= 00:11:09.862 Active slot: 1 00:11:09.862 Slot 1 Firmware Revision: 1.0 00:11:09.862 00:11:09.862 00:11:09.862 Commands Supported and Effects 00:11:09.862 ============================== 00:11:09.862 Admin Commands 00:11:09.862 -------------- 00:11:09.862 Delete I/O Submission Queue (00h): Supported 00:11:09.862 Create I/O Submission Queue (01h): Supported 00:11:09.862 Get Log Page (02h): Supported 00:11:09.862 Delete I/O Completion Queue (04h): Supported 00:11:09.862 Create I/O Completion Queue (05h): Supported 00:11:09.862 Identify (06h): Supported 00:11:09.862 Abort (08h): Supported 00:11:09.862 Set Features (09h): Supported 00:11:09.862 Get Features (0Ah): Supported 00:11:09.862 Asynchronous Event Request (0Ch): Supported 00:11:09.862 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:09.862 Directive Send (19h): Supported 00:11:09.862 Directive Receive (1Ah): Supported 00:11:09.862 Virtualization Management (1Ch): Supported 00:11:09.862 Doorbell Buffer Config (7Ch): Supported 00:11:09.862 Format NVM (80h): Supported LBA-Change 00:11:09.862 I/O Commands 00:11:09.862 ------------ 00:11:09.862 Flush (00h): Supported LBA-Change 00:11:09.862 Write (01h): Supported LBA-Change 00:11:09.862 Read (02h): Supported 00:11:09.862 Compare (05h): Supported 00:11:09.862 Write Zeroes (08h): Supported LBA-Change 00:11:09.862 Dataset Management (09h): Supported LBA-Change 00:11:09.862 Unknown (0Ch): Supported 00:11:09.862 Unknown (12h): Supported 00:11:09.862 Copy (19h): Supported LBA-Change 00:11:09.862 Unknown (1Dh): Supported LBA-Change 00:11:09.862 00:11:09.862 Error Log 00:11:09.862 ========= 00:11:09.862 00:11:09.862 Arbitration 00:11:09.862 =========== 00:11:09.862 Arbitration Burst: no limit 00:11:09.862 00:11:09.862 Power Management 00:11:09.862 ================ 00:11:09.862 Number of Power States: 1 00:11:09.862 Current Power State: Power State #0 00:11:09.862 Power State #0: 00:11:09.862 Max Power: 25.00 W 00:11:09.862 Non-Operational State: Operational 00:11:09.862 Entry Latency: 16 microseconds 00:11:09.862 Exit Latency: 4 microseconds 00:11:09.862 Relative Read Throughput: 0 00:11:09.862 Relative Read Latency: 0 00:11:09.862 Relative Write Throughput: 0 00:11:09.862 Relative Write Latency: 0 00:11:09.862 Idle Power: Not Reported 00:11:09.862 Active Power: Not Reported 00:11:09.862 Non-Operational Permissive Mode: Not Supported 
00:11:09.862 00:11:09.862 Health Information 00:11:09.862 ================== 00:11:09.862 Critical Warnings: 00:11:09.862 Available Spare Space: OK 00:11:09.862 Temperature: OK 00:11:09.862 Device Reliability: OK 00:11:09.862 Read Only: No 00:11:09.862 Volatile Memory Backup: OK 00:11:09.862 Current Temperature: 323 Kelvin (50 Celsius) 00:11:09.862 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:09.862 Available Spare: 0% 00:11:09.862 Available Spare Threshold: 0% 00:11:09.862 Life Percentage Used: 0% 00:11:09.862 Data Units Read: 2387 00:11:09.862 Data Units Written: 2174 00:11:09.862 Host Read Commands: 110747 00:11:09.862 Host Write Commands: 109016 00:11:09.862 Controller Busy Time: 0 minutes 00:11:09.862 Power Cycles: 0 00:11:09.862 Power On Hours: 0 hours 00:11:09.862 Unsafe Shutdowns: 0 00:11:09.862 Unrecoverable Media Errors: 0 00:11:09.862 Lifetime Error Log Entries: 0 00:11:09.862 Warning Temperature Time: 0 minutes 00:11:09.862 Critical Temperature Time: 0 minutes 00:11:09.862 00:11:09.862 Number of Queues 00:11:09.862 ================ 00:11:09.862 Number of I/O Submission Queues: 64 00:11:09.862 Number of I/O Completion Queues: 64 00:11:09.862 00:11:09.862 ZNS Specific Controller Data 00:11:09.862 ============================ 00:11:09.862 Zone Append Size Limit: 0 00:11:09.862 00:11:09.862 00:11:09.862 Active Namespaces 00:11:09.862 ================= 00:11:09.862 Namespace ID:1 00:11:09.862 Error Recovery Timeout: Unlimited 00:11:09.862 Command Set Identifier: NVM (00h) 00:11:09.862 Deallocate: Supported 00:11:09.862 Deallocated/Unwritten Error: Supported 00:11:09.862 Deallocated Read Value: All 0x00 00:11:09.862 Deallocate in Write Zeroes: Not Supported 00:11:09.862 Deallocated Guard Field: 0xFFFF 00:11:09.862 Flush: Supported 00:11:09.862 Reservation: Not Supported 00:11:09.862 Namespace Sharing Capabilities: Private 00:11:09.862 Size (in LBAs): 1048576 (4GiB) 00:11:09.862 Capacity (in LBAs): 1048576 (4GiB) 00:11:09.862 Utilization (in LBAs): 1048576 (4GiB) 00:11:09.862 Thin Provisioning: Not Supported 00:11:09.862 Per-NS Atomic Units: No 00:11:09.862 Maximum Single Source Range Length: 128 00:11:09.862 Maximum Copy Length: 128 00:11:09.862 Maximum Source Range Count: 128 00:11:09.862 NGUID/EUI64 Never Reused: No 00:11:09.862 Namespace Write Protected: No 00:11:09.862 Number of LBA Formats: 8 00:11:09.862 Current LBA Format: LBA Format #04 00:11:09.862 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:09.862 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:09.863 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:09.863 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:09.863 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:09.863 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:09.863 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:09.863 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:09.863 00:11:09.863 NVM Specific Namespace Data 00:11:09.863 =========================== 00:11:09.863 Logical Block Storage Tag Mask: 0 00:11:09.863 Protection Information Capabilities: 00:11:09.863 16b Guard Protection Information Storage Tag Support: No 00:11:09.863 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:09.863 Storage Tag Check Read Support: No 00:11:09.863 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.863 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.863 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.863 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.863 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.863 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.863 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.863 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.863 Namespace ID:2 00:11:09.863 Error Recovery Timeout: Unlimited 00:11:09.863 Command Set Identifier: NVM (00h) 00:11:09.863 Deallocate: Supported 00:11:09.863 Deallocated/Unwritten Error: Supported 00:11:09.863 Deallocated Read Value: All 0x00 00:11:09.863 Deallocate in Write Zeroes: Not Supported 00:11:09.863 Deallocated Guard Field: 0xFFFF 00:11:09.863 Flush: Supported 00:11:09.863 Reservation: Not Supported 00:11:09.863 Namespace Sharing Capabilities: Private 00:11:09.863 Size (in LBAs): 1048576 (4GiB) 00:11:09.863 Capacity (in LBAs): 1048576 (4GiB) 00:11:09.863 Utilization (in LBAs): 1048576 (4GiB) 00:11:09.863 Thin Provisioning: Not Supported 00:11:09.863 Per-NS Atomic Units: No 00:11:09.863 Maximum Single Source Range Length: 128 00:11:09.863 Maximum Copy Length: 128 00:11:09.863 Maximum Source Range Count: 128 00:11:09.863 NGUID/EUI64 Never Reused: No 00:11:09.863 Namespace Write Protected: No 00:11:09.863 Number of LBA Formats: 8 00:11:09.863 Current LBA Format: LBA Format #04 00:11:09.863 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:09.863 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:09.863 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:09.863 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:09.863 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:09.863 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:09.863 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:09.863 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:09.863 00:11:09.863 NVM Specific Namespace Data 00:11:09.863 =========================== 00:11:09.863 Logical Block Storage Tag Mask: 0 00:11:09.863 Protection Information Capabilities: 00:11:09.863 16b Guard Protection Information Storage Tag Support: No 00:11:09.863 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:09.863 Storage Tag Check Read Support: No 00:11:09.863 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.863 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.863 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.863 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.863 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.863 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.863 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.863 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:09.863 Namespace ID:3 00:11:09.863 Error Recovery Timeout: Unlimited 00:11:09.863 Command Set Identifier: NVM (00h) 00:11:09.863 Deallocate: Supported 00:11:09.863 Deallocated/Unwritten Error: Supported 00:11:09.863 Deallocated Read 
Value: All 0x00 00:11:09.863 Deallocate in Write Zeroes: Not Supported 00:11:09.863 Deallocated Guard Field: 0xFFFF 00:11:09.863 Flush: Supported 00:11:09.863 Reservation: Not Supported 00:11:09.863 Namespace Sharing Capabilities: Private 00:11:09.863 Size (in LBAs): 1048576 (4GiB) 00:11:09.863 Capacity (in LBAs): 1048576 (4GiB) 00:11:09.863 Utilization (in LBAs): 1048576 (4GiB) 00:11:09.863 Thin Provisioning: Not Supported 00:11:09.863 Per-NS Atomic Units: No 00:11:09.863 Maximum Single Source Range Length: 128 00:11:09.863 Maximum Copy Length: 128 00:11:09.863 Maximum Source Range Count: 128 00:11:09.863 NGUID/EUI64 Never Reused: No 00:11:09.863 Namespace Write Protected: No 00:11:09.863 Number of LBA Formats: 8 00:11:09.863 Current LBA Format: LBA Format #04 00:11:09.863 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:09.863 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:09.863 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:09.863 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:09.863 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:09.863 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:09.863 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:09.863 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:09.863 00:11:09.863 NVM Specific Namespace Data 00:11:09.863 =========================== 00:11:09.863 Logical Block Storage Tag Mask: 0 00:11:09.863 Protection Information Capabilities: 00:11:09.863 16b Guard Protection Information Storage Tag Support: No 00:11:09.863 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:10.122 Storage Tag Check Read Support: No 00:11:10.122 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:10.122 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:10.122 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:10.122 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:10.122 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:10.122 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:10.122 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:10.122 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:10.122 11:18:37 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:10.122 11:18:37 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:11:10.382 ===================================================== 00:11:10.382 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:10.382 ===================================================== 00:11:10.382 Controller Capabilities/Features 00:11:10.382 ================================ 00:11:10.382 Vendor ID: 1b36 00:11:10.382 Subsystem Vendor ID: 1af4 00:11:10.382 Serial Number: 12343 00:11:10.382 Model Number: QEMU NVMe Ctrl 00:11:10.382 Firmware Version: 8.0.0 00:11:10.382 Recommended Arb Burst: 6 00:11:10.382 IEEE OUI Identifier: 00 54 52 00:11:10.382 Multi-path I/O 00:11:10.382 May have multiple subsystem ports: No 00:11:10.382 May have multiple controllers: Yes 00:11:10.382 Associated with SR-IOV VF: No 00:11:10.382 Max Data Transfer Size: 524288 00:11:10.382 Max Number of Namespaces: 
256 00:11:10.382 Max Number of I/O Queues: 64 00:11:10.382 NVMe Specification Version (VS): 1.4 00:11:10.382 NVMe Specification Version (Identify): 1.4 00:11:10.382 Maximum Queue Entries: 2048 00:11:10.382 Contiguous Queues Required: Yes 00:11:10.382 Arbitration Mechanisms Supported 00:11:10.382 Weighted Round Robin: Not Supported 00:11:10.382 Vendor Specific: Not Supported 00:11:10.382 Reset Timeout: 7500 ms 00:11:10.382 Doorbell Stride: 4 bytes 00:11:10.382 NVM Subsystem Reset: Not Supported 00:11:10.382 Command Sets Supported 00:11:10.382 NVM Command Set: Supported 00:11:10.382 Boot Partition: Not Supported 00:11:10.382 Memory Page Size Minimum: 4096 bytes 00:11:10.382 Memory Page Size Maximum: 65536 bytes 00:11:10.382 Persistent Memory Region: Not Supported 00:11:10.382 Optional Asynchronous Events Supported 00:11:10.382 Namespace Attribute Notices: Supported 00:11:10.382 Firmware Activation Notices: Not Supported 00:11:10.382 ANA Change Notices: Not Supported 00:11:10.382 PLE Aggregate Log Change Notices: Not Supported 00:11:10.382 LBA Status Info Alert Notices: Not Supported 00:11:10.382 EGE Aggregate Log Change Notices: Not Supported 00:11:10.382 Normal NVM Subsystem Shutdown event: Not Supported 00:11:10.382 Zone Descriptor Change Notices: Not Supported 00:11:10.382 Discovery Log Change Notices: Not Supported 00:11:10.382 Controller Attributes 00:11:10.382 128-bit Host Identifier: Not Supported 00:11:10.382 Non-Operational Permissive Mode: Not Supported 00:11:10.382 NVM Sets: Not Supported 00:11:10.382 Read Recovery Levels: Not Supported 00:11:10.382 Endurance Groups: Supported 00:11:10.382 Predictable Latency Mode: Not Supported 00:11:10.382 Traffic Based Keep Alive: Not Supported 00:11:10.382 Namespace Granularity: Not Supported 00:11:10.382 SQ Associations: Not Supported 00:11:10.382 UUID List: Not Supported 00:11:10.382 Multi-Domain Subsystem: Not Supported 00:11:10.382 Fixed Capacity Management: Not Supported 00:11:10.382 Variable Capacity Management: Not Supported 00:11:10.382 Delete Endurance Group: Not Supported 00:11:10.382 Delete NVM Set: Not Supported 00:11:10.382 Extended LBA Formats Supported: Supported 00:11:10.382 Flexible Data Placement Supported: Supported 00:11:10.382 00:11:10.382 Controller Memory Buffer Support 00:11:10.382 ================================ 00:11:10.382 Supported: No 00:11:10.382 00:11:10.382 Persistent Memory Region Support 00:11:10.382 ================================ 00:11:10.382 Supported: No 00:11:10.382 00:11:10.382 Admin Command Set Attributes 00:11:10.382 ============================ 00:11:10.382 Security Send/Receive: Not Supported 00:11:10.382 Format NVM: Supported 00:11:10.382 Firmware Activate/Download: Not Supported 00:11:10.382 Namespace Management: Supported 00:11:10.382 Device Self-Test: Not Supported 00:11:10.382 Directives: Supported 00:11:10.382 NVMe-MI: Not Supported 00:11:10.382 Virtualization Management: Not Supported 00:11:10.382 Doorbell Buffer Config: Supported 00:11:10.382 Get LBA Status Capability: Not Supported 00:11:10.382 Command & Feature Lockdown Capability: Not Supported 00:11:10.382 Abort Command Limit: 4 00:11:10.382 Async Event Request Limit: 4 00:11:10.382 Number of Firmware Slots: N/A 00:11:10.382 Firmware Slot 1 Read-Only: N/A 00:11:10.382 Firmware Activation Without Reset: N/A 00:11:10.382 Multiple Update Detection Support: N/A 00:11:10.382 Firmware Update Granularity: No Information Provided 00:11:10.382 Per-Namespace SMART Log: Yes 00:11:10.382 Asymmetric Namespace Access Log Page: Not Supported
00:11:10.382 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:10.382 Command Effects Log Page: Supported 00:11:10.382 Get Log Page Extended Data: Supported 00:11:10.382 Telemetry Log Pages: Not Supported 00:11:10.382 Persistent Event Log Pages: Not Supported 00:11:10.382 Supported Log Pages Log Page: May Support 00:11:10.382 Commands Supported & Effects Log Page: Not Supported 00:11:10.382 Feature Identifiers & Effects Log Page: May Support 00:11:10.382 NVMe-MI Commands & Effects Log Page: May Support 00:11:10.382 Data Area 4 for Telemetry Log: Not Supported 00:11:10.382 Error Log Page Entries Supported: 1 00:11:10.382 Keep Alive: Not Supported 00:11:10.382 00:11:10.382 NVM Command Set Attributes 00:11:10.382 ========================== 00:11:10.382 Submission Queue Entry Size 00:11:10.382 Max: 64 00:11:10.382 Min: 64 00:11:10.382 Completion Queue Entry Size 00:11:10.382 Max: 16 00:11:10.382 Min: 16 00:11:10.382 Number of Namespaces: 256 00:11:10.382 Compare Command: Supported 00:11:10.382 Write Uncorrectable Command: Not Supported 00:11:10.382 Dataset Management Command: Supported 00:11:10.382 Write Zeroes Command: Supported 00:11:10.382 Set Features Save Field: Supported 00:11:10.382 Reservations: Not Supported 00:11:10.382 Timestamp: Supported 00:11:10.382 Copy: Supported 00:11:10.382 Volatile Write Cache: Present 00:11:10.382 Atomic Write Unit (Normal): 1 00:11:10.382 Atomic Write Unit (PFail): 1 00:11:10.382 Atomic Compare & Write Unit: 1 00:11:10.382 Fused Compare & Write: Not Supported 00:11:10.382 Scatter-Gather List 00:11:10.382 SGL Command Set: Supported 00:11:10.382 SGL Keyed: Not Supported 00:11:10.382 SGL Bit Bucket Descriptor: Not Supported 00:11:10.382 SGL Metadata Pointer: Not Supported 00:11:10.382 Oversized SGL: Not Supported 00:11:10.382 SGL Metadata Address: Not Supported 00:11:10.382 SGL Offset: Not Supported 00:11:10.382 Transport SGL Data Block: Not Supported 00:11:10.382 Replay Protected Memory Block: Not Supported 00:11:10.382 00:11:10.382 Firmware Slot Information 00:11:10.382 ========================= 00:11:10.382 Active slot: 1 00:11:10.382 Slot 1 Firmware Revision: 1.0 00:11:10.382 00:11:10.382 00:11:10.382 Commands Supported and Effects 00:11:10.382 ============================== 00:11:10.382 Admin Commands 00:11:10.382 -------------- 00:11:10.382 Delete I/O Submission Queue (00h): Supported 00:11:10.382 Create I/O Submission Queue (01h): Supported 00:11:10.382 Get Log Page (02h): Supported 00:11:10.382 Delete I/O Completion Queue (04h): Supported 00:11:10.382 Create I/O Completion Queue (05h): Supported 00:11:10.382 Identify (06h): Supported 00:11:10.382 Abort (08h): Supported 00:11:10.382 Set Features (09h): Supported 00:11:10.382 Get Features (0Ah): Supported 00:11:10.382 Asynchronous Event Request (0Ch): Supported 00:11:10.382 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:10.382 Directive Send (19h): Supported 00:11:10.382 Directive Receive (1Ah): Supported 00:11:10.382 Virtualization Management (1Ch): Supported 00:11:10.382 Doorbell Buffer Config (7Ch): Supported 00:11:10.382 Format NVM (80h): Supported LBA-Change 00:11:10.382 I/O Commands 00:11:10.382 ------------ 00:11:10.382 Flush (00h): Supported LBA-Change 00:11:10.382 Write (01h): Supported LBA-Change 00:11:10.382 Read (02h): Supported 00:11:10.382 Compare (05h): Supported 00:11:10.382 Write Zeroes (08h): Supported LBA-Change 00:11:10.383 Dataset Management (09h): Supported LBA-Change 00:11:10.383 Unknown (0Ch): Supported 00:11:10.383 Unknown (12h): Supported 00:11:10.383 Copy
(19h): Supported LBA-Change 00:11:10.383 Unknown (1Dh): Supported LBA-Change 00:11:10.383 00:11:10.383 Error Log 00:11:10.383 ========= 00:11:10.383 00:11:10.383 Arbitration 00:11:10.383 =========== 00:11:10.383 Arbitration Burst: no limit 00:11:10.383 00:11:10.383 Power Management 00:11:10.383 ================ 00:11:10.383 Number of Power States: 1 00:11:10.383 Current Power State: Power State #0 00:11:10.383 Power State #0: 00:11:10.383 Max Power: 25.00 W 00:11:10.383 Non-Operational State: Operational 00:11:10.383 Entry Latency: 16 microseconds 00:11:10.383 Exit Latency: 4 microseconds 00:11:10.383 Relative Read Throughput: 0 00:11:10.383 Relative Read Latency: 0 00:11:10.383 Relative Write Throughput: 0 00:11:10.383 Relative Write Latency: 0 00:11:10.383 Idle Power: Not Reported 00:11:10.383 Active Power: Not Reported 00:11:10.383 Non-Operational Permissive Mode: Not Supported 00:11:10.383 00:11:10.383 Health Information 00:11:10.383 ================== 00:11:10.383 Critical Warnings: 00:11:10.383 Available Spare Space: OK 00:11:10.383 Temperature: OK 00:11:10.383 Device Reliability: OK 00:11:10.383 Read Only: No 00:11:10.383 Volatile Memory Backup: OK 00:11:10.383 Current Temperature: 323 Kelvin (50 Celsius) 00:11:10.383 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:10.383 Available Spare: 0% 00:11:10.383 Available Spare Threshold: 0% 00:11:10.383 Life Percentage Used: 0% 00:11:10.383 Data Units Read: 865 00:11:10.383 Data Units Written: 794 00:11:10.383 Host Read Commands: 37497 00:11:10.383 Host Write Commands: 36920 00:11:10.383 Controller Busy Time: 0 minutes 00:11:10.383 Power Cycles: 0 00:11:10.383 Power On Hours: 0 hours 00:11:10.383 Unsafe Shutdowns: 0 00:11:10.383 Unrecoverable Media Errors: 0 00:11:10.383 Lifetime Error Log Entries: 0 00:11:10.383 Warning Temperature Time: 0 minutes 00:11:10.383 Critical Temperature Time: 0 minutes 00:11:10.383 00:11:10.383 Number of Queues 00:11:10.383 ================ 00:11:10.383 Number of I/O Submission Queues: 64 00:11:10.383 Number of I/O Completion Queues: 64 00:11:10.383 00:11:10.383 ZNS Specific Controller Data 00:11:10.383 ============================ 00:11:10.383 Zone Append Size Limit: 0 00:11:10.383 00:11:10.383 00:11:10.383 Active Namespaces 00:11:10.383 ================= 00:11:10.383 Namespace ID:1 00:11:10.383 Error Recovery Timeout: Unlimited 00:11:10.383 Command Set Identifier: NVM (00h) 00:11:10.383 Deallocate: Supported 00:11:10.383 Deallocated/Unwritten Error: Supported 00:11:10.383 Deallocated Read Value: All 0x00 00:11:10.383 Deallocate in Write Zeroes: Not Supported 00:11:10.383 Deallocated Guard Field: 0xFFFF 00:11:10.383 Flush: Supported 00:11:10.383 Reservation: Not Supported 00:11:10.383 Namespace Sharing Capabilities: Multiple Controllers 00:11:10.383 Size (in LBAs): 262144 (1GiB) 00:11:10.383 Capacity (in LBAs): 262144 (1GiB) 00:11:10.383 Utilization (in LBAs): 262144 (1GiB) 00:11:10.383 Thin Provisioning: Not Supported 00:11:10.383 Per-NS Atomic Units: No 00:11:10.383 Maximum Single Source Range Length: 128 00:11:10.383 Maximum Copy Length: 128 00:11:10.383 Maximum Source Range Count: 128 00:11:10.383 NGUID/EUI64 Never Reused: No 00:11:10.383 Namespace Write Protected: No 00:11:10.383 Endurance group ID: 1 00:11:10.383 Number of LBA Formats: 8 00:11:10.383 Current LBA Format: LBA Format #04 00:11:10.383 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:10.383 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:10.383 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:10.383 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:11:10.383 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:10.383 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:10.383 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:10.383 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:10.383 00:11:10.383 Get Feature FDP: 00:11:10.383 ================ 00:11:10.383 Enabled: Yes 00:11:10.383 FDP configuration index: 0 00:11:10.383 00:11:10.383 FDP configurations log page 00:11:10.383 =========================== 00:11:10.383 Number of FDP configurations: 1 00:11:10.383 Version: 0 00:11:10.383 Size: 112 00:11:10.383 FDP Configuration Descriptor: 0 00:11:10.383 Descriptor Size: 96 00:11:10.383 Reclaim Group Identifier format: 2 00:11:10.383 FDP Volatile Write Cache: Not Present 00:11:10.383 FDP Configuration: Valid 00:11:10.383 Vendor Specific Size: 0 00:11:10.383 Number of Reclaim Groups: 2 00:11:10.383 Number of Reclaim Unit Handles: 8 00:11:10.383 Max Placement Identifiers: 128 00:11:10.383 Number of Namespaces Supported: 256 00:11:10.383 Reclaim Unit Nominal Size: 6000000 bytes 00:11:10.383 Estimated Reclaim Unit Time Limit: Not Reported 00:11:10.383 RUH Desc #000: RUH Type: Initially Isolated 00:11:10.383 RUH Desc #001: RUH Type: Initially Isolated 00:11:10.383 RUH Desc #002: RUH Type: Initially Isolated 00:11:10.383 RUH Desc #003: RUH Type: Initially Isolated 00:11:10.383 RUH Desc #004: RUH Type: Initially Isolated 00:11:10.383 RUH Desc #005: RUH Type: Initially Isolated 00:11:10.383 RUH Desc #006: RUH Type: Initially Isolated 00:11:10.383 RUH Desc #007: RUH Type: Initially Isolated 00:11:10.383 00:11:10.383 FDP reclaim unit handle usage log page 00:11:10.383 ====================================== 00:11:10.383 Number of Reclaim Unit Handles: 8 00:11:10.383 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:10.383 RUH Usage Desc #001: RUH Attributes: Unused 00:11:10.383 RUH Usage Desc #002: RUH Attributes: Unused 00:11:10.383 RUH Usage Desc #003: RUH Attributes: Unused 00:11:10.383 RUH Usage Desc #004: RUH Attributes: Unused 00:11:10.383 RUH Usage Desc #005: RUH Attributes: Unused 00:11:10.383 RUH Usage Desc #006: RUH Attributes: Unused 00:11:10.383 RUH Usage Desc #007: RUH Attributes: Unused 00:11:10.383 00:11:10.383 FDP statistics log page 00:11:10.383 ======================= 00:11:10.383 Host bytes with metadata written: 512860160 00:11:10.383 Media bytes with metadata written: 512917504 00:11:10.383 Media bytes erased: 0 00:11:10.383 00:11:10.383 FDP events log page 00:11:10.383 =================== 00:11:10.383 Number of FDP events: 0 00:11:10.383 00:11:10.383 NVM Specific Namespace Data 00:11:10.383 =========================== 00:11:10.383 Logical Block Storage Tag Mask: 0 00:11:10.383 Protection Information Capabilities: 00:11:10.383 16b Guard Protection Information Storage Tag Support: No 00:11:10.383 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:10.383 Storage Tag Check Read Support: No 00:11:10.383 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:10.383 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:10.383 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:10.383 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:10.383 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:10.383 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:10.383 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:10.383 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:10.383 ************************************ 00:11:10.383 END TEST nvme_identify 00:11:10.383 ************************************ 00:11:10.383 00:11:10.383 real 0m1.740s 00:11:10.383 user 0m0.638s 00:11:10.383 sys 0m0.889s 00:11:10.383 11:18:37 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.383 11:18:37 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:11:10.383 11:18:37 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:10.383 11:18:37 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:10.383 11:18:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.383 11:18:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:10.383 ************************************ 00:11:10.383 START TEST nvme_perf 00:11:10.383 ************************************ 00:11:10.383 11:18:37 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:11:10.383 11:18:37 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:11.758 Initializing NVMe Controllers 00:11:11.758 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:11.758 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:11.758 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:11.758 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:11.758 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:11.758 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:11.758 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:11.758 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:11.758 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:11.758 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:11.758 Initialization complete. Launching workers. 
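For context on the run whose results follow: the spdk_nvme_perf invocation above drives each attached controller at queue depth 128 (-q 128) with sequential 12288-byte reads (-o 12288, -w read) for one second (-t 1), and -LL requests the detailed per-device latency histograms printed below (the remaining flags are CI-harness plumbing). A minimal standalone sketch of the same workload against a single controller, not taken verbatim from this run, assuming the binary path and PCIe address shown in this log and root privileges:

    # hypothetical single-controller rerun of the workload above;
    # -r restricts perf to the controller at the given PCIe address,
    # otherwise it attaches to every NVMe device it can claim
    sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 -w read -o 12288 -t 1 -LL \
        -r 'trtype:PCIe traddr:0000:00:10.0'
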
00:11:11.758 ======================================================== 00:11:11.758 Latency(us) 00:11:11.758 Device Information : IOPS MiB/s Average min max 00:11:11.758 PCIE (0000:00:10.0) NSID 1 from core 0: 13787.93 161.58 9301.53 7969.23 51835.51 00:11:11.758 PCIE (0000:00:11.0) NSID 1 from core 0: 13787.93 161.58 9286.75 8034.32 49755.36 00:11:11.758 PCIE (0000:00:13.0) NSID 1 from core 0: 13787.93 161.58 9271.43 8025.03 48467.34 00:11:11.758 PCIE (0000:00:12.0) NSID 1 from core 0: 13787.93 161.58 9254.50 8094.32 46477.74 00:11:11.758 PCIE (0000:00:12.0) NSID 2 from core 0: 13787.93 161.58 9237.66 8080.37 44366.35 00:11:11.758 PCIE (0000:00:12.0) NSID 3 from core 0: 13851.76 162.33 9178.08 8070.03 37224.52 00:11:11.758 ======================================================== 00:11:11.758 Total : 82791.42 970.21 9254.93 7969.23 51835.51 00:11:11.758 00:11:11.758 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:11.758 ================================================================================= 00:11:11.758 1.00000% : 8211.740us 00:11:11.758 10.00000% : 8474.937us 00:11:11.758 25.00000% : 8685.494us 00:11:11.758 50.00000% : 8948.691us 00:11:11.758 75.00000% : 9211.888us 00:11:11.758 90.00000% : 9475.084us 00:11:11.758 95.00000% : 9685.642us 00:11:11.758 98.00000% : 10317.314us 00:11:11.758 99.00000% : 10948.986us 00:11:11.758 99.50000% : 44848.733us 00:11:11.758 99.90000% : 51586.570us 00:11:11.758 99.99000% : 51797.128us 00:11:11.758 99.99900% : 52007.685us 00:11:11.758 99.99990% : 52007.685us 00:11:11.758 99.99999% : 52007.685us 00:11:11.758 00:11:11.759 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:11.759 ================================================================================= 00:11:11.759 1.00000% : 8264.379us 00:11:11.759 10.00000% : 8527.576us 00:11:11.759 25.00000% : 8685.494us 00:11:11.759 50.00000% : 8948.691us 00:11:11.759 75.00000% : 9211.888us 00:11:11.759 90.00000% : 9422.445us 00:11:11.759 95.00000% : 9633.002us 00:11:11.759 98.00000% : 10212.035us 00:11:11.759 99.00000% : 11212.183us 00:11:11.759 99.50000% : 43164.273us 00:11:11.759 99.90000% : 49480.996us 00:11:11.759 99.99000% : 49902.111us 00:11:11.759 99.99900% : 49902.111us 00:11:11.759 99.99990% : 49902.111us 00:11:11.759 99.99999% : 49902.111us 00:11:11.759 00:11:11.759 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:11.759 ================================================================================= 00:11:11.759 1.00000% : 8317.018us 00:11:11.759 10.00000% : 8527.576us 00:11:11.759 25.00000% : 8738.133us 00:11:11.759 50.00000% : 8948.691us 00:11:11.759 75.00000% : 9211.888us 00:11:11.759 90.00000% : 9422.445us 00:11:11.759 95.00000% : 9580.363us 00:11:11.759 98.00000% : 10212.035us 00:11:11.759 99.00000% : 11159.544us 00:11:11.759 99.50000% : 41900.929us 00:11:11.759 99.90000% : 48217.651us 00:11:11.759 99.99000% : 48638.766us 00:11:11.759 99.99900% : 48638.766us 00:11:11.759 99.99990% : 48638.766us 00:11:11.759 99.99999% : 48638.766us 00:11:11.759 00:11:11.759 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:11.759 ================================================================================= 00:11:11.759 1.00000% : 8317.018us 00:11:11.759 10.00000% : 8527.576us 00:11:11.759 25.00000% : 8685.494us 00:11:11.759 50.00000% : 8948.691us 00:11:11.759 75.00000% : 9211.888us 00:11:11.759 90.00000% : 9422.445us 00:11:11.759 95.00000% : 9580.363us 00:11:11.759 98.00000% : 10317.314us 00:11:11.759 99.00000% : 
11528.019us 00:11:11.759 99.50000% : 39795.354us 00:11:11.759 99.90000% : 46112.077us 00:11:11.759 99.99000% : 46533.192us 00:11:11.759 99.99900% : 46533.192us 00:11:11.759 99.99990% : 46533.192us 00:11:11.759 99.99999% : 46533.192us 00:11:11.759 00:11:11.759 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:11.759 ================================================================================= 00:11:11.759 1.00000% : 8317.018us 00:11:11.759 10.00000% : 8527.576us 00:11:11.759 25.00000% : 8685.494us 00:11:11.759 50.00000% : 8948.691us 00:11:11.759 75.00000% : 9211.888us 00:11:11.759 90.00000% : 9422.445us 00:11:11.759 95.00000% : 9633.002us 00:11:11.759 98.00000% : 10369.953us 00:11:11.759 99.00000% : 11843.855us 00:11:11.759 99.50000% : 37900.337us 00:11:11.759 99.90000% : 44217.060us 00:11:11.759 99.99000% : 44427.618us 00:11:11.759 99.99900% : 44427.618us 00:11:11.759 99.99990% : 44427.618us 00:11:11.759 99.99999% : 44427.618us 00:11:11.759 00:11:11.759 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:11.759 ================================================================================= 00:11:11.759 1.00000% : 8264.379us 00:11:11.759 10.00000% : 8527.576us 00:11:11.759 25.00000% : 8685.494us 00:11:11.759 50.00000% : 8948.691us 00:11:11.759 75.00000% : 9211.888us 00:11:11.759 90.00000% : 9422.445us 00:11:11.759 95.00000% : 9633.002us 00:11:11.759 98.00000% : 10422.593us 00:11:11.759 99.00000% : 12264.970us 00:11:11.759 99.50000% : 30530.827us 00:11:11.759 99.90000% : 37058.108us 00:11:11.759 99.99000% : 37268.665us 00:11:11.759 99.99900% : 37268.665us 00:11:11.759 99.99990% : 37268.665us 00:11:11.759 99.99999% : 37268.665us 00:11:11.759 00:11:11.759 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:11.759 ============================================================================== 00:11:11.759 Range in us Cumulative IO count 00:11:11.759 7948.543 - 8001.182: 0.0506% ( 7) 00:11:11.759 8001.182 - 8053.822: 0.1157% ( 9) 00:11:11.759 8053.822 - 8106.461: 0.3834% ( 37) 00:11:11.759 8106.461 - 8159.100: 0.8536% ( 65) 00:11:11.759 8159.100 - 8211.740: 1.6204% ( 106) 00:11:11.759 8211.740 - 8264.379: 2.7633% ( 158) 00:11:11.759 8264.379 - 8317.018: 4.3837% ( 224) 00:11:11.759 8317.018 - 8369.658: 6.4959% ( 292) 00:11:11.759 8369.658 - 8422.297: 9.0495% ( 353) 00:11:11.759 8422.297 - 8474.937: 12.2468% ( 442) 00:11:11.759 8474.937 - 8527.576: 15.9071% ( 506) 00:11:11.759 8527.576 - 8580.215: 19.4734% ( 493) 00:11:11.759 8580.215 - 8632.855: 23.6617% ( 579) 00:11:11.759 8632.855 - 8685.494: 27.9803% ( 597) 00:11:11.759 8685.494 - 8738.133: 32.5014% ( 625) 00:11:11.759 8738.133 - 8790.773: 37.4060% ( 678) 00:11:11.759 8790.773 - 8843.412: 42.1875% ( 661) 00:11:11.759 8843.412 - 8896.051: 46.8533% ( 645) 00:11:11.759 8896.051 - 8948.691: 51.9097% ( 699) 00:11:11.759 8948.691 - 9001.330: 56.7491% ( 669) 00:11:11.759 9001.330 - 9053.969: 61.6898% ( 683) 00:11:11.759 9053.969 - 9106.609: 66.4786% ( 662) 00:11:11.759 9106.609 - 9159.248: 71.0359% ( 630) 00:11:11.759 9159.248 - 9211.888: 75.3255% ( 593) 00:11:11.759 9211.888 - 9264.527: 79.3547% ( 557) 00:11:11.759 9264.527 - 9317.166: 82.9572% ( 498) 00:11:11.759 9317.166 - 9369.806: 86.0677% ( 430) 00:11:11.759 9369.806 - 9422.445: 88.6140% ( 352) 00:11:11.759 9422.445 - 9475.084: 90.6322% ( 279) 00:11:11.759 9475.084 - 9527.724: 92.2815% ( 228) 00:11:11.759 9527.724 - 9580.363: 93.6343% ( 187) 00:11:11.759 9580.363 - 9633.002: 94.6542% ( 141) 00:11:11.759 9633.002 - 9685.642: 95.4644% ( 
112) 00:11:11.759 9685.642 - 9738.281: 96.0720% ( 84) 00:11:11.759 9738.281 - 9790.920: 96.4844% ( 57) 00:11:11.759 9790.920 - 9843.560: 96.7954% ( 43) 00:11:11.759 9843.560 - 9896.199: 97.0703% ( 38) 00:11:11.759 9896.199 - 9948.839: 97.3090% ( 33) 00:11:11.759 9948.839 - 10001.478: 97.4103% ( 14) 00:11:11.759 10001.478 - 10054.117: 97.5477% ( 19) 00:11:11.759 10054.117 - 10106.757: 97.6852% ( 19) 00:11:11.759 10106.757 - 10159.396: 97.7648% ( 11) 00:11:11.759 10159.396 - 10212.035: 97.8805% ( 16) 00:11:11.759 10212.035 - 10264.675: 97.9818% ( 14) 00:11:11.759 10264.675 - 10317.314: 98.0758% ( 13) 00:11:11.759 10317.314 - 10369.953: 98.1481% ( 10) 00:11:11.759 10369.953 - 10422.593: 98.2350% ( 12) 00:11:11.759 10422.593 - 10475.232: 98.3218% ( 12) 00:11:11.759 10475.232 - 10527.871: 98.4086% ( 12) 00:11:11.759 10527.871 - 10580.511: 98.4954% ( 12) 00:11:11.759 10580.511 - 10633.150: 98.6039% ( 15) 00:11:11.759 10633.150 - 10685.790: 98.7052% ( 14) 00:11:11.759 10685.790 - 10738.429: 98.8064% ( 14) 00:11:11.759 10738.429 - 10791.068: 98.8860% ( 11) 00:11:11.759 10791.068 - 10843.708: 98.9583% ( 10) 00:11:11.759 10843.708 - 10896.347: 98.9873% ( 4) 00:11:11.759 10896.347 - 10948.986: 99.0234% ( 5) 00:11:11.759 10948.986 - 11001.626: 99.0379% ( 2) 00:11:11.759 11001.626 - 11054.265: 99.0524% ( 2) 00:11:11.759 11054.265 - 11106.904: 99.0668% ( 2) 00:11:11.759 11106.904 - 11159.544: 99.0741% ( 1) 00:11:11.759 43164.273 - 43374.831: 99.1319% ( 8) 00:11:11.759 43374.831 - 43585.388: 99.1753% ( 6) 00:11:11.759 43585.388 - 43795.945: 99.2260% ( 7) 00:11:11.759 43795.945 - 44006.503: 99.2766% ( 7) 00:11:11.759 44006.503 - 44217.060: 99.3345% ( 8) 00:11:11.759 44217.060 - 44427.618: 99.3924% ( 8) 00:11:11.759 44427.618 - 44638.175: 99.4430% ( 7) 00:11:11.759 44638.175 - 44848.733: 99.5009% ( 8) 00:11:11.759 44848.733 - 45059.290: 99.5370% ( 5) 00:11:11.759 49902.111 - 50112.668: 99.5804% ( 6) 00:11:11.759 50112.668 - 50323.226: 99.6311% ( 7) 00:11:11.759 50323.226 - 50533.783: 99.6817% ( 7) 00:11:11.759 50533.783 - 50744.341: 99.7323% ( 7) 00:11:11.759 50744.341 - 50954.898: 99.7830% ( 7) 00:11:11.759 50954.898 - 51165.455: 99.8336% ( 7) 00:11:11.759 51165.455 - 51376.013: 99.8843% ( 7) 00:11:11.759 51376.013 - 51586.570: 99.9349% ( 7) 00:11:11.759 51586.570 - 51797.128: 99.9928% ( 8) 00:11:11.759 51797.128 - 52007.685: 100.0000% ( 1) 00:11:11.759 00:11:11.759 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:11.759 ============================================================================== 00:11:11.759 Range in us Cumulative IO count 00:11:11.759 8001.182 - 8053.822: 0.0434% ( 6) 00:11:11.759 8053.822 - 8106.461: 0.0868% ( 6) 00:11:11.759 8106.461 - 8159.100: 0.2098% ( 17) 00:11:11.759 8159.100 - 8211.740: 0.5932% ( 53) 00:11:11.759 8211.740 - 8264.379: 1.1502% ( 77) 00:11:11.759 8264.379 - 8317.018: 2.2786% ( 156) 00:11:11.759 8317.018 - 8369.658: 3.9858% ( 236) 00:11:11.759 8369.658 - 8422.297: 6.0402% ( 284) 00:11:11.759 8422.297 - 8474.937: 8.8542% ( 389) 00:11:11.759 8474.937 - 8527.576: 12.1311% ( 453) 00:11:11.759 8527.576 - 8580.215: 16.1097% ( 550) 00:11:11.759 8580.215 - 8632.855: 20.5295% ( 611) 00:11:11.759 8632.855 - 8685.494: 25.1374% ( 637) 00:11:11.759 8685.494 - 8738.133: 30.2300% ( 704) 00:11:11.759 8738.133 - 8790.773: 35.4239% ( 718) 00:11:11.759 8790.773 - 8843.412: 41.0590% ( 779) 00:11:11.759 8843.412 - 8896.051: 46.5784% ( 763) 00:11:11.759 8896.051 - 8948.691: 52.2714% ( 787) 00:11:11.759 8948.691 - 9001.330: 57.9644% ( 787) 00:11:11.759 9001.330 - 
9053.969: 63.4042% ( 752) 00:11:11.759 9053.969 - 9106.609: 68.7645% ( 741) 00:11:11.759 9106.609 - 9159.248: 73.7124% ( 684) 00:11:11.759 9159.248 - 9211.888: 78.1033% ( 607) 00:11:11.759 9211.888 - 9264.527: 81.9951% ( 538) 00:11:11.759 9264.527 - 9317.166: 85.3588% ( 465) 00:11:11.759 9317.166 - 9369.806: 88.1221% ( 382) 00:11:11.759 9369.806 - 9422.445: 90.3284% ( 305) 00:11:11.759 9422.445 - 9475.084: 92.0790% ( 242) 00:11:11.759 9475.084 - 9527.724: 93.4100% ( 184) 00:11:11.759 9527.724 - 9580.363: 94.5530% ( 158) 00:11:11.759 9580.363 - 9633.002: 95.4210% ( 120) 00:11:11.759 9633.002 - 9685.642: 96.1299% ( 98) 00:11:11.759 9685.642 - 9738.281: 96.6001% ( 65) 00:11:11.760 9738.281 - 9790.920: 96.9401% ( 47) 00:11:11.760 9790.920 - 9843.560: 97.1933% ( 35) 00:11:11.760 9843.560 - 9896.199: 97.3741% ( 25) 00:11:11.760 9896.199 - 9948.839: 97.4899% ( 16) 00:11:11.760 9948.839 - 10001.478: 97.5767% ( 12) 00:11:11.760 10001.478 - 10054.117: 97.6924% ( 16) 00:11:11.760 10054.117 - 10106.757: 97.8082% ( 16) 00:11:11.760 10106.757 - 10159.396: 97.9167% ( 15) 00:11:11.760 10159.396 - 10212.035: 98.0252% ( 15) 00:11:11.760 10212.035 - 10264.675: 98.1771% ( 21) 00:11:11.760 10264.675 - 10317.314: 98.2711% ( 13) 00:11:11.760 10317.314 - 10369.953: 98.3579% ( 12) 00:11:11.760 10369.953 - 10422.593: 98.4520% ( 13) 00:11:11.760 10422.593 - 10475.232: 98.5460% ( 13) 00:11:11.760 10475.232 - 10527.871: 98.5966% ( 7) 00:11:11.760 10527.871 - 10580.511: 98.6473% ( 7) 00:11:11.760 10580.511 - 10633.150: 98.7052% ( 8) 00:11:11.760 10633.150 - 10685.790: 98.7558% ( 7) 00:11:11.760 10685.790 - 10738.429: 98.7992% ( 6) 00:11:11.760 10738.429 - 10791.068: 98.8571% ( 8) 00:11:11.760 10791.068 - 10843.708: 98.8788% ( 3) 00:11:11.760 10843.708 - 10896.347: 98.9005% ( 3) 00:11:11.760 10896.347 - 10948.986: 98.9077% ( 1) 00:11:11.760 10948.986 - 11001.626: 98.9222% ( 2) 00:11:11.760 11001.626 - 11054.265: 98.9439% ( 3) 00:11:11.760 11054.265 - 11106.904: 98.9583% ( 2) 00:11:11.760 11106.904 - 11159.544: 98.9800% ( 3) 00:11:11.760 11159.544 - 11212.183: 99.0017% ( 3) 00:11:11.760 11212.183 - 11264.822: 99.0162% ( 2) 00:11:11.760 11264.822 - 11317.462: 99.0307% ( 2) 00:11:11.760 11317.462 - 11370.101: 99.0451% ( 2) 00:11:11.760 11370.101 - 11422.741: 99.0596% ( 2) 00:11:11.760 11422.741 - 11475.380: 99.0741% ( 2) 00:11:11.760 41269.256 - 41479.814: 99.0885% ( 2) 00:11:11.760 41479.814 - 41690.371: 99.1464% ( 8) 00:11:11.760 41690.371 - 41900.929: 99.1970% ( 7) 00:11:11.760 41900.929 - 42111.486: 99.2549% ( 8) 00:11:11.760 42111.486 - 42322.043: 99.3128% ( 8) 00:11:11.760 42322.043 - 42532.601: 99.3707% ( 8) 00:11:11.760 42532.601 - 42743.158: 99.4285% ( 8) 00:11:11.760 42743.158 - 42953.716: 99.4792% ( 7) 00:11:11.760 42953.716 - 43164.273: 99.5370% ( 8) 00:11:11.760 48007.094 - 48217.651: 99.5949% ( 8) 00:11:11.760 48217.651 - 48428.209: 99.6528% ( 8) 00:11:11.760 48428.209 - 48638.766: 99.7034% ( 7) 00:11:11.760 48638.766 - 48849.324: 99.7541% ( 7) 00:11:11.760 48849.324 - 49059.881: 99.8119% ( 8) 00:11:11.760 49059.881 - 49270.439: 99.8698% ( 8) 00:11:11.760 49270.439 - 49480.996: 99.9204% ( 7) 00:11:11.760 49480.996 - 49691.553: 99.9783% ( 8) 00:11:11.760 49691.553 - 49902.111: 100.0000% ( 3) 00:11:11.760 00:11:11.760 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:11.760 ============================================================================== 00:11:11.760 Range in us Cumulative IO count 00:11:11.760 8001.182 - 8053.822: 0.0217% ( 3) 00:11:11.760 8053.822 - 8106.461: 0.0506% ( 4) 
00:11:11.760 8106.461 - 8159.100: 0.1230% ( 10) 00:11:11.760 8159.100 - 8211.740: 0.3183% ( 27) 00:11:11.760 8211.740 - 8264.379: 0.8174% ( 69) 00:11:11.760 8264.379 - 8317.018: 1.9893% ( 162) 00:11:11.760 8317.018 - 8369.658: 3.6314% ( 227) 00:11:11.760 8369.658 - 8422.297: 5.7436% ( 292) 00:11:11.760 8422.297 - 8474.937: 8.4057% ( 368) 00:11:11.760 8474.937 - 8527.576: 11.6464% ( 448) 00:11:11.760 8527.576 - 8580.215: 15.5671% ( 542) 00:11:11.760 8580.215 - 8632.855: 20.0810% ( 624) 00:11:11.760 8632.855 - 8685.494: 24.8264% ( 656) 00:11:11.760 8685.494 - 8738.133: 30.1505% ( 736) 00:11:11.760 8738.133 - 8790.773: 35.5179% ( 742) 00:11:11.760 8790.773 - 8843.412: 41.2326% ( 790) 00:11:11.760 8843.412 - 8896.051: 46.9618% ( 792) 00:11:11.760 8896.051 - 8948.691: 52.5680% ( 775) 00:11:11.760 8948.691 - 9001.330: 58.2755% ( 789) 00:11:11.760 9001.330 - 9053.969: 64.0119% ( 793) 00:11:11.760 9053.969 - 9106.609: 69.4372% ( 750) 00:11:11.760 9106.609 - 9159.248: 74.3851% ( 684) 00:11:11.760 9159.248 - 9211.888: 78.7471% ( 603) 00:11:11.760 9211.888 - 9264.527: 82.7257% ( 550) 00:11:11.760 9264.527 - 9317.166: 86.1545% ( 474) 00:11:11.760 9317.166 - 9369.806: 89.0553% ( 401) 00:11:11.760 9369.806 - 9422.445: 91.3122% ( 312) 00:11:11.760 9422.445 - 9475.084: 93.0194% ( 236) 00:11:11.760 9475.084 - 9527.724: 94.2853% ( 175) 00:11:11.760 9527.724 - 9580.363: 95.1968% ( 126) 00:11:11.760 9580.363 - 9633.002: 95.8695% ( 93) 00:11:11.760 9633.002 - 9685.642: 96.4120% ( 75) 00:11:11.760 9685.642 - 9738.281: 96.8244% ( 57) 00:11:11.760 9738.281 - 9790.920: 97.1065% ( 39) 00:11:11.760 9790.920 - 9843.560: 97.2656% ( 22) 00:11:11.760 9843.560 - 9896.199: 97.3886% ( 17) 00:11:11.760 9896.199 - 9948.839: 97.5043% ( 16) 00:11:11.760 9948.839 - 10001.478: 97.6056% ( 14) 00:11:11.760 10001.478 - 10054.117: 97.7286% ( 17) 00:11:11.760 10054.117 - 10106.757: 97.8516% ( 17) 00:11:11.760 10106.757 - 10159.396: 97.9456% ( 13) 00:11:11.760 10159.396 - 10212.035: 98.0469% ( 14) 00:11:11.760 10212.035 - 10264.675: 98.1337% ( 12) 00:11:11.760 10264.675 - 10317.314: 98.2133% ( 11) 00:11:11.760 10317.314 - 10369.953: 98.2928% ( 11) 00:11:11.760 10369.953 - 10422.593: 98.3507% ( 8) 00:11:11.760 10422.593 - 10475.232: 98.4230% ( 10) 00:11:11.760 10475.232 - 10527.871: 98.4881% ( 9) 00:11:11.760 10527.871 - 10580.511: 98.5605% ( 10) 00:11:11.760 10580.511 - 10633.150: 98.6400% ( 11) 00:11:11.760 10633.150 - 10685.790: 98.7196% ( 11) 00:11:11.760 10685.790 - 10738.429: 98.7920% ( 10) 00:11:11.760 10738.429 - 10791.068: 98.8354% ( 6) 00:11:11.760 10791.068 - 10843.708: 98.8788% ( 6) 00:11:11.760 10843.708 - 10896.347: 98.9222% ( 6) 00:11:11.760 10896.347 - 10948.986: 98.9439% ( 3) 00:11:11.760 10948.986 - 11001.626: 98.9583% ( 2) 00:11:11.760 11001.626 - 11054.265: 98.9728% ( 2) 00:11:11.760 11054.265 - 11106.904: 98.9945% ( 3) 00:11:11.760 11106.904 - 11159.544: 99.0090% ( 2) 00:11:11.760 11159.544 - 11212.183: 99.0307% ( 3) 00:11:11.760 11212.183 - 11264.822: 99.0524% ( 3) 00:11:11.760 11264.822 - 11317.462: 99.0668% ( 2) 00:11:11.760 11317.462 - 11370.101: 99.0741% ( 1) 00:11:11.760 40005.912 - 40216.469: 99.1030% ( 4) 00:11:11.760 40216.469 - 40427.027: 99.1536% ( 7) 00:11:11.760 40427.027 - 40637.584: 99.2115% ( 8) 00:11:11.760 40637.584 - 40848.141: 99.2549% ( 6) 00:11:11.760 40848.141 - 41058.699: 99.3056% ( 7) 00:11:11.760 41058.699 - 41269.256: 99.3634% ( 8) 00:11:11.760 41269.256 - 41479.814: 99.4285% ( 9) 00:11:11.760 41479.814 - 41690.371: 99.4792% ( 7) 00:11:11.760 41690.371 - 41900.929: 99.5370% ( 8) 
00:11:11.760 46533.192 - 46743.749: 99.5443% ( 1) 00:11:11.760 46743.749 - 46954.307: 99.5949% ( 7) 00:11:11.760 46954.307 - 47164.864: 99.6528% ( 8) 00:11:11.760 47164.864 - 47375.422: 99.6962% ( 6) 00:11:11.760 47375.422 - 47585.979: 99.7541% ( 8) 00:11:11.760 47585.979 - 47796.537: 99.8119% ( 8) 00:11:11.760 47796.537 - 48007.094: 99.8698% ( 8) 00:11:11.760 48007.094 - 48217.651: 99.9277% ( 8) 00:11:11.760 48217.651 - 48428.209: 99.9855% ( 8) 00:11:11.760 48428.209 - 48638.766: 100.0000% ( 2) 00:11:11.760 00:11:11.760 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:11.760 ============================================================================== 00:11:11.760 Range in us Cumulative IO count 00:11:11.760 8053.822 - 8106.461: 0.0145% ( 2) 00:11:11.760 8106.461 - 8159.100: 0.0796% ( 9) 00:11:11.760 8159.100 - 8211.740: 0.2894% ( 29) 00:11:11.760 8211.740 - 8264.379: 0.8536% ( 78) 00:11:11.760 8264.379 - 8317.018: 1.7795% ( 128) 00:11:11.760 8317.018 - 8369.658: 3.2914% ( 209) 00:11:11.760 8369.658 - 8422.297: 5.3819% ( 289) 00:11:11.760 8422.297 - 8474.937: 8.1380% ( 381) 00:11:11.760 8474.937 - 8527.576: 11.6247% ( 482) 00:11:11.760 8527.576 - 8580.215: 15.6395% ( 555) 00:11:11.760 8580.215 - 8632.855: 20.2402% ( 636) 00:11:11.760 8632.855 - 8685.494: 25.2459% ( 692) 00:11:11.760 8685.494 - 8738.133: 30.4470% ( 719) 00:11:11.760 8738.133 - 8790.773: 35.9158% ( 756) 00:11:11.760 8790.773 - 8843.412: 41.4714% ( 768) 00:11:11.760 8843.412 - 8896.051: 47.1644% ( 787) 00:11:11.760 8896.051 - 8948.691: 52.7488% ( 772) 00:11:11.760 8948.691 - 9001.330: 58.5503% ( 802) 00:11:11.760 9001.330 - 9053.969: 64.1059% ( 768) 00:11:11.760 9053.969 - 9106.609: 69.4083% ( 733) 00:11:11.760 9106.609 - 9159.248: 74.4719% ( 700) 00:11:11.760 9159.248 - 9211.888: 79.0292% ( 630) 00:11:11.760 9211.888 - 9264.527: 83.0729% ( 559) 00:11:11.760 9264.527 - 9317.166: 86.3643% ( 455) 00:11:11.760 9317.166 - 9369.806: 89.1565% ( 386) 00:11:11.760 9369.806 - 9422.445: 91.3339% ( 301) 00:11:11.760 9422.445 - 9475.084: 92.9036% ( 217) 00:11:11.760 9475.084 - 9527.724: 94.0466% ( 158) 00:11:11.760 9527.724 - 9580.363: 95.0231% ( 135) 00:11:11.760 9580.363 - 9633.002: 95.6597% ( 88) 00:11:11.760 9633.002 - 9685.642: 96.1444% ( 67) 00:11:11.760 9685.642 - 9738.281: 96.5278% ( 53) 00:11:11.760 9738.281 - 9790.920: 96.8605% ( 46) 00:11:11.760 9790.920 - 9843.560: 97.1427% ( 39) 00:11:11.760 9843.560 - 9896.199: 97.3380% ( 27) 00:11:11.760 9896.199 - 9948.839: 97.4826% ( 20) 00:11:11.760 9948.839 - 10001.478: 97.5839% ( 14) 00:11:11.760 10001.478 - 10054.117: 97.6780% ( 13) 00:11:11.760 10054.117 - 10106.757: 97.7720% ( 13) 00:11:11.760 10106.757 - 10159.396: 97.8733% ( 14) 00:11:11.760 10159.396 - 10212.035: 97.9239% ( 7) 00:11:11.760 10212.035 - 10264.675: 97.9745% ( 7) 00:11:11.760 10264.675 - 10317.314: 98.0324% ( 8) 00:11:11.760 10317.314 - 10369.953: 98.0975% ( 9) 00:11:11.760 10369.953 - 10422.593: 98.1698% ( 10) 00:11:11.760 10422.593 - 10475.232: 98.2422% ( 10) 00:11:11.760 10475.232 - 10527.871: 98.3218% ( 11) 00:11:11.760 10527.871 - 10580.511: 98.3941% ( 10) 00:11:11.760 10580.511 - 10633.150: 98.4737% ( 11) 00:11:11.760 10633.150 - 10685.790: 98.5532% ( 11) 00:11:11.760 10685.790 - 10738.429: 98.6328% ( 11) 00:11:11.760 10738.429 - 10791.068: 98.7124% ( 11) 00:11:11.760 10791.068 - 10843.708: 98.7558% ( 6) 00:11:11.760 10843.708 - 10896.347: 98.7992% ( 6) 00:11:11.761 10896.347 - 10948.986: 98.8209% ( 3) 00:11:11.761 10948.986 - 11001.626: 98.8281% ( 1) 00:11:11.761 11001.626 - 11054.265: 
98.8426% ( 2) 00:11:11.761 11054.265 - 11106.904: 98.8643% ( 3) 00:11:11.761 11106.904 - 11159.544: 98.8788% ( 2) 00:11:11.761 11159.544 - 11212.183: 98.8932% ( 2) 00:11:11.761 11212.183 - 11264.822: 98.9149% ( 3) 00:11:11.761 11264.822 - 11317.462: 98.9366% ( 3) 00:11:11.761 11317.462 - 11370.101: 98.9511% ( 2) 00:11:11.761 11370.101 - 11422.741: 98.9728% ( 3) 00:11:11.761 11422.741 - 11475.380: 98.9873% ( 2) 00:11:11.761 11475.380 - 11528.019: 99.0017% ( 2) 00:11:11.761 11528.019 - 11580.659: 99.0234% ( 3) 00:11:11.761 11580.659 - 11633.298: 99.0451% ( 3) 00:11:11.761 11633.298 - 11685.937: 99.0596% ( 2) 00:11:11.761 11685.937 - 11738.577: 99.0741% ( 2) 00:11:11.761 37900.337 - 38110.895: 99.0813% ( 1) 00:11:11.761 38110.895 - 38321.452: 99.1319% ( 7) 00:11:11.761 38321.452 - 38532.010: 99.1898% ( 8) 00:11:11.761 38532.010 - 38742.567: 99.2405% ( 7) 00:11:11.761 38742.567 - 38953.124: 99.2911% ( 7) 00:11:11.761 38953.124 - 39163.682: 99.3562% ( 9) 00:11:11.761 39163.682 - 39374.239: 99.4068% ( 7) 00:11:11.761 39374.239 - 39584.797: 99.4647% ( 8) 00:11:11.761 39584.797 - 39795.354: 99.5009% ( 5) 00:11:11.761 39795.354 - 40005.912: 99.5370% ( 5) 00:11:11.761 44638.175 - 44848.733: 99.5804% ( 6) 00:11:11.761 44848.733 - 45059.290: 99.6383% ( 8) 00:11:11.761 45059.290 - 45269.847: 99.6889% ( 7) 00:11:11.761 45269.847 - 45480.405: 99.7396% ( 7) 00:11:11.761 45480.405 - 45690.962: 99.7975% ( 8) 00:11:11.761 45690.962 - 45901.520: 99.8481% ( 7) 00:11:11.761 45901.520 - 46112.077: 99.9060% ( 8) 00:11:11.761 46112.077 - 46322.635: 99.9566% ( 7) 00:11:11.761 46322.635 - 46533.192: 100.0000% ( 6) 00:11:11.761 00:11:11.761 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:11.761 ============================================================================== 00:11:11.761 Range in us Cumulative IO count 00:11:11.761 8053.822 - 8106.461: 0.0217% ( 3) 00:11:11.761 8106.461 - 8159.100: 0.0868% ( 9) 00:11:11.761 8159.100 - 8211.740: 0.2532% ( 23) 00:11:11.761 8211.740 - 8264.379: 0.7740% ( 72) 00:11:11.761 8264.379 - 8317.018: 1.8519% ( 149) 00:11:11.761 8317.018 - 8369.658: 3.2335% ( 191) 00:11:11.761 8369.658 - 8422.297: 5.3964% ( 299) 00:11:11.761 8422.297 - 8474.937: 8.2176% ( 390) 00:11:11.761 8474.937 - 8527.576: 11.6753% ( 478) 00:11:11.761 8527.576 - 8580.215: 15.7407% ( 562) 00:11:11.761 8580.215 - 8632.855: 20.2980% ( 630) 00:11:11.761 8632.855 - 8685.494: 25.3545% ( 699) 00:11:11.761 8685.494 - 8738.133: 30.4977% ( 711) 00:11:11.761 8738.133 - 8790.773: 35.9592% ( 755) 00:11:11.761 8790.773 - 8843.412: 41.4858% ( 764) 00:11:11.761 8843.412 - 8896.051: 47.0775% ( 773) 00:11:11.761 8896.051 - 8948.691: 52.7705% ( 787) 00:11:11.761 8948.691 - 9001.330: 58.3623% ( 773) 00:11:11.761 9001.330 - 9053.969: 64.0553% ( 787) 00:11:11.761 9053.969 - 9106.609: 69.4372% ( 744) 00:11:11.761 9106.609 - 9159.248: 74.3490% ( 679) 00:11:11.761 9159.248 - 9211.888: 78.9062% ( 630) 00:11:11.761 9211.888 - 9264.527: 83.0006% ( 566) 00:11:11.761 9264.527 - 9317.166: 86.4439% ( 476) 00:11:11.761 9317.166 - 9369.806: 89.1493% ( 374) 00:11:11.761 9369.806 - 9422.445: 91.2833% ( 295) 00:11:11.761 9422.445 - 9475.084: 92.9109% ( 225) 00:11:11.761 9475.084 - 9527.724: 94.0611% ( 159) 00:11:11.761 9527.724 - 9580.363: 94.9436% ( 122) 00:11:11.761 9580.363 - 9633.002: 95.6091% ( 92) 00:11:11.761 9633.002 - 9685.642: 96.1372% ( 73) 00:11:11.761 9685.642 - 9738.281: 96.5061% ( 51) 00:11:11.761 9738.281 - 9790.920: 96.8027% ( 41) 00:11:11.761 9790.920 - 9843.560: 97.0414% ( 33) 00:11:11.761 9843.560 - 9896.199: 
97.2367% ( 27) 00:11:11.761 9896.199 - 9948.839: 97.3886% ( 21) 00:11:11.761 9948.839 - 10001.478: 97.5116% ( 17) 00:11:11.761 10001.478 - 10054.117: 97.5839% ( 10) 00:11:11.761 10054.117 - 10106.757: 97.6780% ( 13) 00:11:11.761 10106.757 - 10159.396: 97.7720% ( 13) 00:11:11.761 10159.396 - 10212.035: 97.8371% ( 9) 00:11:11.761 10212.035 - 10264.675: 97.9167% ( 11) 00:11:11.761 10264.675 - 10317.314: 97.9890% ( 10) 00:11:11.761 10317.314 - 10369.953: 98.0541% ( 9) 00:11:11.761 10369.953 - 10422.593: 98.1192% ( 9) 00:11:11.761 10422.593 - 10475.232: 98.1771% ( 8) 00:11:11.761 10475.232 - 10527.871: 98.2422% ( 9) 00:11:11.761 10527.871 - 10580.511: 98.3001% ( 8) 00:11:11.761 10580.511 - 10633.150: 98.3579% ( 8) 00:11:11.761 10633.150 - 10685.790: 98.4158% ( 8) 00:11:11.761 10685.790 - 10738.429: 98.4881% ( 10) 00:11:11.761 10738.429 - 10791.068: 98.5677% ( 11) 00:11:11.761 10791.068 - 10843.708: 98.6256% ( 8) 00:11:11.761 10843.708 - 10896.347: 98.6762% ( 7) 00:11:11.761 10896.347 - 10948.986: 98.6907% ( 2) 00:11:11.761 10948.986 - 11001.626: 98.7052% ( 2) 00:11:11.761 11001.626 - 11054.265: 98.7269% ( 3) 00:11:11.761 11054.265 - 11106.904: 98.7486% ( 3) 00:11:11.761 11106.904 - 11159.544: 98.7630% ( 2) 00:11:11.761 11159.544 - 11212.183: 98.7775% ( 2) 00:11:11.761 11212.183 - 11264.822: 98.8064% ( 4) 00:11:11.761 11264.822 - 11317.462: 98.8281% ( 3) 00:11:11.761 11317.462 - 11370.101: 98.8498% ( 3) 00:11:11.761 11370.101 - 11422.741: 98.8643% ( 2) 00:11:11.761 11422.741 - 11475.380: 98.8860% ( 3) 00:11:11.761 11475.380 - 11528.019: 98.9005% ( 2) 00:11:11.761 11528.019 - 11580.659: 98.9149% ( 2) 00:11:11.761 11580.659 - 11633.298: 98.9294% ( 2) 00:11:11.761 11633.298 - 11685.937: 98.9511% ( 3) 00:11:11.761 11685.937 - 11738.577: 98.9656% ( 2) 00:11:11.761 11738.577 - 11791.216: 98.9800% ( 2) 00:11:11.761 11791.216 - 11843.855: 99.0017% ( 3) 00:11:11.761 11843.855 - 11896.495: 99.0162% ( 2) 00:11:11.761 11896.495 - 11949.134: 99.0307% ( 2) 00:11:11.761 11949.134 - 12001.773: 99.0451% ( 2) 00:11:11.761 12001.773 - 12054.413: 99.0668% ( 3) 00:11:11.761 12054.413 - 12107.052: 99.0741% ( 1) 00:11:11.761 36005.320 - 36215.878: 99.1175% ( 6) 00:11:11.761 36215.878 - 36426.435: 99.1609% ( 6) 00:11:11.761 36426.435 - 36636.993: 99.2115% ( 7) 00:11:11.761 36636.993 - 36847.550: 99.2694% ( 8) 00:11:11.761 36847.550 - 37058.108: 99.3200% ( 7) 00:11:11.761 37058.108 - 37268.665: 99.3779% ( 8) 00:11:11.761 37268.665 - 37479.222: 99.4285% ( 7) 00:11:11.761 37479.222 - 37689.780: 99.4792% ( 7) 00:11:11.761 37689.780 - 37900.337: 99.5370% ( 8) 00:11:11.761 42532.601 - 42743.158: 99.5877% ( 7) 00:11:11.761 42743.158 - 42953.716: 99.6383% ( 7) 00:11:11.761 42953.716 - 43164.273: 99.6817% ( 6) 00:11:11.761 43164.273 - 43374.831: 99.7396% ( 8) 00:11:11.761 43374.831 - 43585.388: 99.7902% ( 7) 00:11:11.761 43585.388 - 43795.945: 99.8481% ( 8) 00:11:11.761 43795.945 - 44006.503: 99.8987% ( 7) 00:11:11.761 44006.503 - 44217.060: 99.9638% ( 9) 00:11:11.761 44217.060 - 44427.618: 100.0000% ( 5) 00:11:11.761 00:11:11.761 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:11.761 ============================================================================== 00:11:11.761 Range in us Cumulative IO count 00:11:11.761 8053.822 - 8106.461: 0.0576% ( 8) 00:11:11.761 8106.461 - 8159.100: 0.2160% ( 22) 00:11:11.761 8159.100 - 8211.740: 0.5256% ( 43) 00:11:11.761 8211.740 - 8264.379: 1.1233% ( 83) 00:11:11.761 8264.379 - 8317.018: 2.1601% ( 144) 00:11:11.761 8317.018 - 8369.658: 3.7370% ( 219) 00:11:11.761 
8369.658 - 8422.297: 5.6236% ( 262) 00:11:11.761 8422.297 - 8474.937: 8.4317% ( 390) 00:11:11.761 8474.937 - 8527.576: 11.7944% ( 467) 00:11:11.761 8527.576 - 8580.215: 15.7762% ( 553) 00:11:11.761 8580.215 - 8632.855: 20.3485% ( 635) 00:11:11.761 8632.855 - 8685.494: 25.1728% ( 670) 00:11:11.761 8685.494 - 8738.133: 30.3427% ( 718) 00:11:11.761 8738.133 - 8790.773: 35.6639% ( 739) 00:11:11.761 8790.773 - 8843.412: 41.2082% ( 770) 00:11:11.761 8843.412 - 8896.051: 46.7958% ( 776) 00:11:11.761 8896.051 - 8948.691: 52.3834% ( 776) 00:11:11.761 8948.691 - 9001.330: 58.1005% ( 794) 00:11:11.761 9001.330 - 9053.969: 63.5945% ( 763) 00:11:11.761 9053.969 - 9106.609: 68.8148% ( 725) 00:11:11.761 9106.609 - 9159.248: 73.8191% ( 695) 00:11:11.761 9159.248 - 9211.888: 78.3266% ( 626) 00:11:11.761 9211.888 - 9264.527: 82.4453% ( 572) 00:11:11.761 9264.527 - 9317.166: 85.6639% ( 447) 00:11:11.761 9317.166 - 9369.806: 88.3857% ( 378) 00:11:11.761 9369.806 - 9422.445: 90.5674% ( 303) 00:11:11.761 9422.445 - 9475.084: 92.1947% ( 226) 00:11:11.761 9475.084 - 9527.724: 93.4404% ( 173) 00:11:11.761 9527.724 - 9580.363: 94.3908% ( 132) 00:11:11.761 9580.363 - 9633.002: 95.1181% ( 101) 00:11:11.761 9633.002 - 9685.642: 95.6509% ( 74) 00:11:11.761 9685.642 - 9738.281: 96.0901% ( 61) 00:11:11.761 9738.281 - 9790.920: 96.4070% ( 44) 00:11:11.761 9790.920 - 9843.560: 96.6590% ( 35) 00:11:11.761 9843.560 - 9896.199: 96.8534% ( 27) 00:11:11.761 9896.199 - 9948.839: 96.9830% ( 18) 00:11:11.761 9948.839 - 10001.478: 97.1270% ( 20) 00:11:11.761 10001.478 - 10054.117: 97.2782% ( 21) 00:11:11.761 10054.117 - 10106.757: 97.3862% ( 15) 00:11:11.761 10106.757 - 10159.396: 97.4942% ( 15) 00:11:11.761 10159.396 - 10212.035: 97.5950% ( 14) 00:11:11.761 10212.035 - 10264.675: 97.7103% ( 16) 00:11:11.761 10264.675 - 10317.314: 97.8255% ( 16) 00:11:11.761 10317.314 - 10369.953: 97.9191% ( 13) 00:11:11.761 10369.953 - 10422.593: 98.0271% ( 15) 00:11:11.761 10422.593 - 10475.232: 98.1423% ( 16) 00:11:11.761 10475.232 - 10527.871: 98.2215% ( 11) 00:11:11.761 10527.871 - 10580.511: 98.3007% ( 11) 00:11:11.761 10580.511 - 10633.150: 98.3799% ( 11) 00:11:11.761 10633.150 - 10685.790: 98.4447% ( 9) 00:11:11.761 10685.790 - 10738.429: 98.5167% ( 10) 00:11:11.761 10738.429 - 10791.068: 98.5743% ( 8) 00:11:11.762 10791.068 - 10843.708: 98.6103% ( 5) 00:11:11.762 10843.708 - 10896.347: 98.6175% ( 1) 00:11:11.762 11001.626 - 11054.265: 98.6319% ( 2) 00:11:11.762 11054.265 - 11106.904: 98.6463% ( 2) 00:11:11.762 11106.904 - 11159.544: 98.6607% ( 2) 00:11:11.762 11159.544 - 11212.183: 98.6823% ( 3) 00:11:11.762 11212.183 - 11264.822: 98.6967% ( 2) 00:11:11.762 11264.822 - 11317.462: 98.7183% ( 3) 00:11:11.762 11317.462 - 11370.101: 98.7399% ( 3) 00:11:11.762 11370.101 - 11422.741: 98.7471% ( 1) 00:11:11.762 11422.741 - 11475.380: 98.7687% ( 3) 00:11:11.762 11475.380 - 11528.019: 98.7831% ( 2) 00:11:11.762 11528.019 - 11580.659: 98.7975% ( 2) 00:11:11.762 11580.659 - 11633.298: 98.8191% ( 3) 00:11:11.762 11633.298 - 11685.937: 98.8335% ( 2) 00:11:11.762 11685.937 - 11738.577: 98.8479% ( 2) 00:11:11.762 11738.577 - 11791.216: 98.8623% ( 2) 00:11:11.762 11791.216 - 11843.855: 98.8767% ( 2) 00:11:11.762 11843.855 - 11896.495: 98.8983% ( 3) 00:11:11.762 11896.495 - 11949.134: 98.9199% ( 3) 00:11:11.762 11949.134 - 12001.773: 98.9343% ( 2) 00:11:11.762 12001.773 - 12054.413: 98.9487% ( 2) 00:11:11.762 12054.413 - 12107.052: 98.9703% ( 3) 00:11:11.762 12107.052 - 12159.692: 98.9847% ( 2) 00:11:11.762 12159.692 - 12212.331: 98.9991% ( 2) 00:11:11.762 
12212.331 - 12264.970: 99.0135% ( 2) 00:11:11.762 12264.970 - 12317.610: 99.0279% ( 2) 00:11:11.762 12317.610 - 12370.249: 99.0495% ( 3) 00:11:11.762 12370.249 - 12422.888: 99.0639% ( 2) 00:11:11.762 12422.888 - 12475.528: 99.0783% ( 2) 00:11:11.762 28846.368 - 29056.925: 99.1071% ( 4) 00:11:11.762 29056.925 - 29267.483: 99.1647% ( 8) 00:11:11.762 29267.483 - 29478.040: 99.2151% ( 7) 00:11:11.762 29478.040 - 29688.598: 99.2728% ( 8) 00:11:11.762 29688.598 - 29899.155: 99.3304% ( 8) 00:11:11.762 29899.155 - 30109.712: 99.3952% ( 9) 00:11:11.762 30109.712 - 30320.270: 99.4456% ( 7) 00:11:11.762 30320.270 - 30530.827: 99.5032% ( 8) 00:11:11.762 30530.827 - 30741.385: 99.5392% ( 5) 00:11:11.762 35373.648 - 35584.206: 99.5896% ( 7) 00:11:11.762 35584.206 - 35794.763: 99.6400% ( 7) 00:11:11.762 35794.763 - 36005.320: 99.6904% ( 7) 00:11:11.762 36005.320 - 36215.878: 99.7408% ( 7) 00:11:11.762 36215.878 - 36426.435: 99.7912% ( 7) 00:11:11.762 36426.435 - 36636.993: 99.8488% ( 8) 00:11:11.762 36636.993 - 36847.550: 99.8992% ( 7) 00:11:11.762 36847.550 - 37058.108: 99.9496% ( 7) 00:11:11.762 37058.108 - 37268.665: 100.0000% ( 7) 00:11:11.762 00:11:11.762 11:18:38 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:11:13.140 Initializing NVMe Controllers 00:11:13.140 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:13.140 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:13.140 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:13.140 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:13.140 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:13.140 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:13.140 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:13.140 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:13.140 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:13.140 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:13.140 Initialization complete. Launching workers. 
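A note for readers spot-checking the tables above and below: the MiB/s column is derivable from the printed IOPS and the 12288-byte I/O size, as MiB/s = IOPS × 12288 / 2^20. A quick sanity check using only values that appear in this log (the read run above and the write run that follows):

    # 13787.93 IOPS (read)  -> ~161.58 MiB/s
    # 10833.83 IOPS (write) -> ~126.96 MiB/s
    awk 'BEGIN { printf "%.2f %.2f\n", 13787.93*12288/1048576, 10833.83*12288/1048576 }'

Both results agree with the per-device rows reported by spdk_nvme_perf.
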
00:11:13.140 ======================================================== 00:11:13.140 Latency(us) 00:11:13.140 Device Information : IOPS MiB/s Average min max 00:11:13.140 PCIE (0000:00:10.0) NSID 1 from core 0: 10833.83 126.96 11841.77 8291.10 46557.63 00:11:13.140 PCIE (0000:00:11.0) NSID 1 from core 0: 10833.83 126.96 11822.04 8537.77 45129.69 00:11:13.140 PCIE (0000:00:13.0) NSID 1 from core 0: 10833.83 126.96 11801.97 8188.14 44557.71 00:11:13.140 PCIE (0000:00:12.0) NSID 1 from core 0: 10833.83 126.96 11783.66 8532.36 43299.39 00:11:13.140 PCIE (0000:00:12.0) NSID 2 from core 0: 10833.83 126.96 11765.25 8507.76 42037.58 00:11:13.140 PCIE (0000:00:12.0) NSID 3 from core 0: 10833.83 126.96 11745.48 8557.28 40596.64 00:11:13.140 ======================================================== 00:11:13.140 Total : 65002.96 761.75 11793.36 8188.14 46557.63 00:11:13.140 00:11:13.140 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:13.140 ================================================================================= 00:11:13.140 1.00000% : 8896.051us 00:11:13.140 10.00000% : 9317.166us 00:11:13.140 25.00000% : 9633.002us 00:11:13.140 50.00000% : 10264.675us 00:11:13.140 75.00000% : 12896.643us 00:11:13.140 90.00000% : 15897.086us 00:11:13.140 95.00000% : 18002.660us 00:11:13.140 98.00000% : 20108.235us 00:11:13.140 99.00000% : 32425.844us 00:11:13.140 99.50000% : 44217.060us 00:11:13.140 99.90000% : 46112.077us 00:11:13.140 99.99000% : 46533.192us 00:11:13.140 99.99900% : 46743.749us 00:11:13.140 99.99990% : 46743.749us 00:11:13.140 99.99999% : 46743.749us 00:11:13.140 00:11:13.140 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:13.140 ================================================================================= 00:11:13.140 1.00000% : 8948.691us 00:11:13.140 10.00000% : 9317.166us 00:11:13.140 25.00000% : 9633.002us 00:11:13.140 50.00000% : 10264.675us 00:11:13.140 75.00000% : 12896.643us 00:11:13.140 90.00000% : 15897.086us 00:11:13.140 95.00000% : 18213.218us 00:11:13.140 98.00000% : 19581.841us 00:11:13.140 99.00000% : 32425.844us 00:11:13.140 99.50000% : 42953.716us 00:11:13.140 99.90000% : 44848.733us 00:11:13.140 99.99000% : 45269.847us 00:11:13.140 99.99900% : 45269.847us 00:11:13.140 99.99990% : 45269.847us 00:11:13.140 99.99999% : 45269.847us 00:11:13.140 00:11:13.140 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:13.140 ================================================================================= 00:11:13.140 1.00000% : 8896.051us 00:11:13.140 10.00000% : 9369.806us 00:11:13.140 25.00000% : 9633.002us 00:11:13.140 50.00000% : 10264.675us 00:11:13.140 75.00000% : 12896.643us 00:11:13.140 90.00000% : 15791.807us 00:11:13.140 95.00000% : 17897.382us 00:11:13.140 98.00000% : 20002.956us 00:11:13.140 99.00000% : 31794.172us 00:11:13.140 99.50000% : 42322.043us 00:11:13.140 99.90000% : 44217.060us 00:11:13.140 99.99000% : 44638.175us 00:11:13.140 99.99900% : 44638.175us 00:11:13.140 99.99990% : 44638.175us 00:11:13.140 99.99999% : 44638.175us 00:11:13.140 00:11:13.140 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:13.140 ================================================================================= 00:11:13.140 1.00000% : 8948.691us 00:11:13.140 10.00000% : 9369.806us 00:11:13.140 25.00000% : 9633.002us 00:11:13.140 50.00000% : 10264.675us 00:11:13.140 75.00000% : 13159.839us 00:11:13.140 90.00000% : 16107.643us 00:11:13.140 95.00000% : 17581.545us 00:11:13.140 98.00000% : 20002.956us 
00:11:13.140 99.00000% : 30530.827us 00:11:13.140 99.50000% : 41269.256us 00:11:13.140 99.90000% : 42953.716us 00:11:13.140 99.99000% : 43374.831us 00:11:13.140 99.99900% : 43374.831us 00:11:13.140 99.99990% : 43374.831us 00:11:13.140 99.99999% : 43374.831us 00:11:13.140 00:11:13.140 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:13.140 ================================================================================= 00:11:13.140 1.00000% : 8948.691us 00:11:13.140 10.00000% : 9369.806us 00:11:13.140 25.00000% : 9685.642us 00:11:13.140 50.00000% : 10264.675us 00:11:13.140 75.00000% : 13159.839us 00:11:13.140 90.00000% : 16107.643us 00:11:13.140 95.00000% : 17792.103us 00:11:13.140 98.00000% : 19476.562us 00:11:13.140 99.00000% : 29899.155us 00:11:13.140 99.50000% : 38742.567us 00:11:13.140 99.90000% : 41690.371us 00:11:13.140 99.99000% : 42111.486us 00:11:13.140 99.99900% : 42111.486us 00:11:13.140 99.99990% : 42111.486us 00:11:13.141 99.99999% : 42111.486us 00:11:13.141 00:11:13.141 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:13.141 ================================================================================= 00:11:13.141 1.00000% : 8896.051us 00:11:13.141 10.00000% : 9369.806us 00:11:13.141 25.00000% : 9685.642us 00:11:13.141 50.00000% : 10317.314us 00:11:13.141 75.00000% : 13054.561us 00:11:13.141 90.00000% : 15897.086us 00:11:13.141 95.00000% : 17476.267us 00:11:13.141 98.00000% : 19371.284us 00:11:13.141 99.00000% : 29056.925us 00:11:13.141 99.50000% : 37900.337us 00:11:13.141 99.90000% : 40216.469us 00:11:13.141 99.99000% : 40637.584us 00:11:13.141 99.99900% : 40637.584us 00:11:13.141 99.99990% : 40637.584us 00:11:13.141 99.99999% : 40637.584us 00:11:13.141 00:11:13.141 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:13.141 ============================================================================== 00:11:13.141 Range in us Cumulative IO count 00:11:13.141 8264.379 - 8317.018: 0.0092% ( 1) 00:11:13.141 8422.297 - 8474.937: 0.0184% ( 1) 00:11:13.141 8474.937 - 8527.576: 0.0643% ( 5) 00:11:13.141 8527.576 - 8580.215: 0.2849% ( 24) 00:11:13.141 8580.215 - 8632.855: 0.3033% ( 2) 00:11:13.141 8632.855 - 8685.494: 0.3217% ( 2) 00:11:13.141 8685.494 - 8738.133: 0.4136% ( 10) 00:11:13.141 8738.133 - 8790.773: 0.6618% ( 27) 00:11:13.141 8790.773 - 8843.412: 0.9191% ( 28) 00:11:13.141 8843.412 - 8896.051: 1.2500% ( 36) 00:11:13.141 8896.051 - 8948.691: 1.9945% ( 81) 00:11:13.141 8948.691 - 9001.330: 2.7574% ( 83) 00:11:13.141 9001.330 - 9053.969: 3.6857% ( 101) 00:11:13.141 9053.969 - 9106.609: 4.7518% ( 116) 00:11:13.141 9106.609 - 9159.248: 5.9191% ( 127) 00:11:13.141 9159.248 - 9211.888: 7.3162% ( 152) 00:11:13.141 9211.888 - 9264.527: 8.7776% ( 159) 00:11:13.141 9264.527 - 9317.166: 10.4596% ( 183) 00:11:13.141 9317.166 - 9369.806: 12.3346% ( 204) 00:11:13.141 9369.806 - 9422.445: 14.4485% ( 230) 00:11:13.141 9422.445 - 9475.084: 16.9853% ( 276) 00:11:13.141 9475.084 - 9527.724: 20.3217% ( 363) 00:11:13.141 9527.724 - 9580.363: 22.9044% ( 281) 00:11:13.141 9580.363 - 9633.002: 25.4688% ( 279) 00:11:13.141 9633.002 - 9685.642: 27.8033% ( 254) 00:11:13.141 9685.642 - 9738.281: 30.5699% ( 301) 00:11:13.141 9738.281 - 9790.920: 33.0607% ( 271) 00:11:13.141 9790.920 - 9843.560: 35.3860% ( 253) 00:11:13.141 9843.560 - 9896.199: 37.5735% ( 238) 00:11:13.141 9896.199 - 9948.839: 39.6783% ( 229) 00:11:13.141 9948.839 - 10001.478: 41.7555% ( 226) 00:11:13.141 10001.478 - 10054.117: 43.6121% ( 202) 00:11:13.141 10054.117 - 
10106.757: 45.4779% ( 203) 00:11:13.141 10106.757 - 10159.396: 46.9761% ( 163) 00:11:13.141 10159.396 - 10212.035: 48.4835% ( 164) 00:11:13.141 10212.035 - 10264.675: 50.1471% ( 181) 00:11:13.141 10264.675 - 10317.314: 51.7463% ( 174) 00:11:13.141 10317.314 - 10369.953: 53.1342% ( 151) 00:11:13.141 10369.953 - 10422.593: 54.2279% ( 119) 00:11:13.141 10422.593 - 10475.232: 55.1195% ( 97) 00:11:13.141 10475.232 - 10527.871: 56.0754% ( 104) 00:11:13.141 10527.871 - 10580.511: 57.0956% ( 111) 00:11:13.141 10580.511 - 10633.150: 57.9596% ( 94) 00:11:13.141 10633.150 - 10685.790: 58.8419% ( 96) 00:11:13.141 10685.790 - 10738.429: 59.8438% ( 109) 00:11:13.141 10738.429 - 10791.068: 60.5239% ( 74) 00:11:13.141 10791.068 - 10843.708: 60.9651% ( 48) 00:11:13.141 10843.708 - 10896.347: 61.3971% ( 47) 00:11:13.141 10896.347 - 10948.986: 61.7463% ( 38) 00:11:13.141 10948.986 - 11001.626: 62.1967% ( 49) 00:11:13.141 11001.626 - 11054.265: 62.6195% ( 46) 00:11:13.141 11054.265 - 11106.904: 62.9871% ( 40) 00:11:13.141 11106.904 - 11159.544: 63.2812% ( 32) 00:11:13.141 11159.544 - 11212.183: 63.7684% ( 53) 00:11:13.141 11212.183 - 11264.822: 64.2923% ( 57) 00:11:13.141 11264.822 - 11317.462: 64.8989% ( 66) 00:11:13.141 11317.462 - 11370.101: 65.4136% ( 56) 00:11:13.141 11370.101 - 11422.741: 65.7721% ( 39) 00:11:13.141 11422.741 - 11475.380: 66.1489% ( 41) 00:11:13.141 11475.380 - 11528.019: 66.4062% ( 28) 00:11:13.141 11528.019 - 11580.659: 66.7096% ( 33) 00:11:13.141 11580.659 - 11633.298: 67.0404% ( 36) 00:11:13.141 11633.298 - 11685.937: 67.3438% ( 33) 00:11:13.141 11685.937 - 11738.577: 67.6562% ( 34) 00:11:13.141 11738.577 - 11791.216: 68.0331% ( 41) 00:11:13.141 11791.216 - 11843.855: 68.3732% ( 37) 00:11:13.141 11843.855 - 11896.495: 68.6857% ( 34) 00:11:13.141 11896.495 - 11949.134: 68.8879% ( 22) 00:11:13.141 11949.134 - 12001.773: 69.2004% ( 34) 00:11:13.141 12001.773 - 12054.413: 69.6415% ( 48) 00:11:13.141 12054.413 - 12107.052: 70.0551% ( 45) 00:11:13.141 12107.052 - 12159.692: 70.4320% ( 41) 00:11:13.141 12159.692 - 12212.331: 70.7353% ( 33) 00:11:13.141 12212.331 - 12264.970: 70.9835% ( 27) 00:11:13.141 12264.970 - 12317.610: 71.2408% ( 28) 00:11:13.141 12317.610 - 12370.249: 71.4982% ( 28) 00:11:13.141 12370.249 - 12422.888: 71.8290% ( 36) 00:11:13.141 12422.888 - 12475.528: 72.0864% ( 28) 00:11:13.141 12475.528 - 12528.167: 72.2886% ( 22) 00:11:13.141 12528.167 - 12580.806: 72.5643% ( 30) 00:11:13.141 12580.806 - 12633.446: 73.1893% ( 68) 00:11:13.141 12633.446 - 12686.085: 73.6673% ( 52) 00:11:13.141 12686.085 - 12738.724: 74.0625% ( 43) 00:11:13.141 12738.724 - 12791.364: 74.3566% ( 32) 00:11:13.141 12791.364 - 12844.003: 74.6507% ( 32) 00:11:13.141 12844.003 - 12896.643: 75.0184% ( 40) 00:11:13.141 12896.643 - 12949.282: 75.4320% ( 45) 00:11:13.141 12949.282 - 13001.921: 75.6893% ( 28) 00:11:13.141 13001.921 - 13054.561: 75.9651% ( 30) 00:11:13.141 13054.561 - 13107.200: 76.2040% ( 26) 00:11:13.141 13107.200 - 13159.839: 76.4154% ( 23) 00:11:13.141 13159.839 - 13212.479: 76.6268% ( 23) 00:11:13.141 13212.479 - 13265.118: 76.8474% ( 24) 00:11:13.141 13265.118 - 13317.757: 77.0221% ( 19) 00:11:13.141 13317.757 - 13370.397: 77.2426% ( 24) 00:11:13.141 13370.397 - 13423.036: 77.5368% ( 32) 00:11:13.141 13423.036 - 13475.676: 77.7022% ( 18) 00:11:13.141 13475.676 - 13580.954: 78.1618% ( 50) 00:11:13.141 13580.954 - 13686.233: 78.3824% ( 24) 00:11:13.141 13686.233 - 13791.512: 78.6949% ( 34) 00:11:13.141 13791.512 - 13896.790: 78.9614% ( 29) 00:11:13.141 13896.790 - 14002.069: 79.2555% ( 32) 
00:11:13.141 14002.069 - 14107.348: 79.5772% ( 35) 00:11:13.141 14107.348 - 14212.627: 79.9173% ( 37) 00:11:13.141 14212.627 - 14317.905: 80.4596% ( 59) 00:11:13.141 14317.905 - 14423.184: 81.0386% ( 63) 00:11:13.141 14423.184 - 14528.463: 81.4614% ( 46) 00:11:13.141 14528.463 - 14633.741: 81.9945% ( 58) 00:11:13.141 14633.741 - 14739.020: 82.4724% ( 52) 00:11:13.141 14739.020 - 14844.299: 82.9779% ( 55) 00:11:13.141 14844.299 - 14949.578: 83.3456% ( 40) 00:11:13.141 14949.578 - 15054.856: 83.9982% ( 71) 00:11:13.141 15054.856 - 15160.135: 84.6324% ( 69) 00:11:13.141 15160.135 - 15265.414: 85.3125% ( 74) 00:11:13.141 15265.414 - 15370.692: 86.2684% ( 104) 00:11:13.141 15370.692 - 15475.971: 87.0956% ( 90) 00:11:13.141 15475.971 - 15581.250: 87.9963% ( 98) 00:11:13.141 15581.250 - 15686.529: 88.7592% ( 83) 00:11:13.141 15686.529 - 15791.807: 89.4301% ( 73) 00:11:13.141 15791.807 - 15897.086: 90.0643% ( 69) 00:11:13.141 15897.086 - 16002.365: 90.6985% ( 69) 00:11:13.141 16002.365 - 16107.643: 91.1673% ( 51) 00:11:13.141 16107.643 - 16212.922: 91.5809% ( 45) 00:11:13.141 16212.922 - 16318.201: 91.9118% ( 36) 00:11:13.141 16318.201 - 16423.480: 92.1599% ( 27) 00:11:13.141 16423.480 - 16528.758: 92.3254% ( 18) 00:11:13.141 16528.758 - 16634.037: 92.5460% ( 24) 00:11:13.141 16634.037 - 16739.316: 92.6838% ( 15) 00:11:13.141 16739.316 - 16844.594: 92.7849% ( 11) 00:11:13.141 16844.594 - 16949.873: 92.9136% ( 14) 00:11:13.141 16949.873 - 17055.152: 93.0882% ( 19) 00:11:13.141 17055.152 - 17160.431: 93.1801% ( 10) 00:11:13.141 17160.431 - 17265.709: 93.2629% ( 9) 00:11:13.141 17265.709 - 17370.988: 93.3456% ( 9) 00:11:13.141 17370.988 - 17476.267: 93.4926% ( 16) 00:11:13.141 17476.267 - 17581.545: 93.6673% ( 19) 00:11:13.141 17581.545 - 17686.824: 94.0165% ( 38) 00:11:13.141 17686.824 - 17792.103: 94.3566% ( 37) 00:11:13.141 17792.103 - 17897.382: 94.7702% ( 45) 00:11:13.141 17897.382 - 18002.660: 95.0551% ( 31) 00:11:13.141 18002.660 - 18107.939: 95.3952% ( 37) 00:11:13.141 18107.939 - 18213.218: 95.6066% ( 23) 00:11:13.141 18213.218 - 18318.496: 95.8088% ( 22) 00:11:13.141 18318.496 - 18423.775: 95.9007% ( 10) 00:11:13.141 18423.775 - 18529.054: 96.0110% ( 12) 00:11:13.141 18529.054 - 18634.333: 96.1397% ( 14) 00:11:13.141 18634.333 - 18739.611: 96.3879% ( 27) 00:11:13.141 18739.611 - 18844.890: 96.5717% ( 20) 00:11:13.141 18844.890 - 18950.169: 96.7004% ( 14) 00:11:13.141 18950.169 - 19055.447: 96.9485% ( 27) 00:11:13.141 19055.447 - 19160.726: 97.2702% ( 35) 00:11:13.141 19160.726 - 19266.005: 97.4357% ( 18) 00:11:13.141 19266.005 - 19371.284: 97.5551% ( 13) 00:11:13.141 19371.284 - 19476.562: 97.6654% ( 12) 00:11:13.141 19476.562 - 19581.841: 97.7665% ( 11) 00:11:13.141 19581.841 - 19687.120: 97.8309% ( 7) 00:11:13.141 19687.120 - 19792.398: 97.8676% ( 4) 00:11:13.141 19792.398 - 19897.677: 97.9136% ( 5) 00:11:13.141 19897.677 - 20002.956: 97.9596% ( 5) 00:11:13.141 20002.956 - 20108.235: 98.0147% ( 6) 00:11:13.141 20108.235 - 20213.513: 98.0607% ( 5) 00:11:13.141 20213.513 - 20318.792: 98.0882% ( 3) 00:11:13.141 20318.792 - 20424.071: 98.1250% ( 4) 00:11:13.141 20424.071 - 20529.349: 98.1801% ( 6) 00:11:13.141 20529.349 - 20634.628: 98.2445% ( 7) 00:11:13.141 20634.628 - 20739.907: 98.3272% ( 9) 00:11:13.141 20739.907 - 20845.186: 98.3915% ( 7) 00:11:13.141 20845.186 - 20950.464: 98.5386% ( 16) 00:11:13.141 20950.464 - 21055.743: 98.5846% ( 5) 00:11:13.141 21055.743 - 21161.022: 98.6213% ( 4) 00:11:13.141 21161.022 - 21266.300: 98.6581% ( 4) 00:11:13.142 21266.300 - 21371.579: 98.6765% ( 2) 
00:11:13.142 21371.579 - 21476.858: 98.7132% ( 4) 00:11:13.142 21687.415 - 21792.694: 98.7408% ( 3) 00:11:13.142 21792.694 - 21897.973: 98.7684% ( 3) 00:11:13.142 21897.973 - 22003.251: 98.8051% ( 4) 00:11:13.142 22003.251 - 22108.530: 98.8235% ( 2) 00:11:13.142 31373.057 - 31583.614: 98.8511% ( 3) 00:11:13.142 31583.614 - 31794.172: 98.8971% ( 5) 00:11:13.142 31794.172 - 32004.729: 98.9430% ( 5) 00:11:13.142 32004.729 - 32215.287: 98.9798% ( 4) 00:11:13.142 32215.287 - 32425.844: 99.0165% ( 4) 00:11:13.142 32425.844 - 32636.402: 99.0533% ( 4) 00:11:13.142 32636.402 - 32846.959: 99.0993% ( 5) 00:11:13.142 32846.959 - 33057.516: 99.1268% ( 3) 00:11:13.142 33057.516 - 33268.074: 99.1820% ( 6) 00:11:13.142 33268.074 - 33478.631: 99.2188% ( 4) 00:11:13.142 33478.631 - 33689.189: 99.2463% ( 3) 00:11:13.142 33689.189 - 33899.746: 99.2923% ( 5) 00:11:13.142 33899.746 - 34110.304: 99.3382% ( 5) 00:11:13.142 34110.304 - 34320.861: 99.3750% ( 4) 00:11:13.142 34320.861 - 34531.418: 99.4118% ( 4) 00:11:13.142 43585.388 - 43795.945: 99.4485% ( 4) 00:11:13.142 43795.945 - 44006.503: 99.4945% ( 5) 00:11:13.142 44006.503 - 44217.060: 99.5312% ( 4) 00:11:13.142 44217.060 - 44427.618: 99.5680% ( 4) 00:11:13.142 44427.618 - 44638.175: 99.6140% ( 5) 00:11:13.142 44638.175 - 44848.733: 99.6599% ( 5) 00:11:13.142 44848.733 - 45059.290: 99.6967% ( 4) 00:11:13.142 45059.290 - 45269.847: 99.7426% ( 5) 00:11:13.142 45269.847 - 45480.405: 99.7886% ( 5) 00:11:13.142 45480.405 - 45690.962: 99.8346% ( 5) 00:11:13.142 45690.962 - 45901.520: 99.8621% ( 3) 00:11:13.142 45901.520 - 46112.077: 99.9081% ( 5) 00:11:13.142 46112.077 - 46322.635: 99.9540% ( 5) 00:11:13.142 46322.635 - 46533.192: 99.9908% ( 4) 00:11:13.142 46533.192 - 46743.749: 100.0000% ( 1) 00:11:13.142 00:11:13.142 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:13.142 ============================================================================== 00:11:13.142 Range in us Cumulative IO count 00:11:13.142 8527.576 - 8580.215: 0.0092% ( 1) 00:11:13.142 8580.215 - 8632.855: 0.0460% ( 4) 00:11:13.142 8632.855 - 8685.494: 0.1654% ( 13) 00:11:13.142 8685.494 - 8738.133: 0.2574% ( 10) 00:11:13.142 8738.133 - 8790.773: 0.3952% ( 15) 00:11:13.142 8790.773 - 8843.412: 0.6250% ( 25) 00:11:13.142 8843.412 - 8896.051: 0.8732% ( 27) 00:11:13.142 8896.051 - 8948.691: 1.4246% ( 60) 00:11:13.142 8948.691 - 9001.330: 2.2702% ( 92) 00:11:13.142 9001.330 - 9053.969: 3.0147% ( 81) 00:11:13.142 9053.969 - 9106.609: 3.7316% ( 78) 00:11:13.142 9106.609 - 9159.248: 4.7978% ( 116) 00:11:13.142 9159.248 - 9211.888: 6.2132% ( 154) 00:11:13.142 9211.888 - 9264.527: 7.9228% ( 186) 00:11:13.142 9264.527 - 9317.166: 10.0092% ( 227) 00:11:13.142 9317.166 - 9369.806: 12.3805% ( 258) 00:11:13.142 9369.806 - 9422.445: 14.5680% ( 238) 00:11:13.142 9422.445 - 9475.084: 17.0404% ( 269) 00:11:13.142 9475.084 - 9527.724: 19.7702% ( 297) 00:11:13.142 9527.724 - 9580.363: 22.4724% ( 294) 00:11:13.142 9580.363 - 9633.002: 25.4044% ( 319) 00:11:13.142 9633.002 - 9685.642: 28.1710% ( 301) 00:11:13.142 9685.642 - 9738.281: 30.7261% ( 278) 00:11:13.142 9738.281 - 9790.920: 33.4835% ( 300) 00:11:13.142 9790.920 - 9843.560: 36.0662% ( 281) 00:11:13.142 9843.560 - 9896.199: 38.2077% ( 233) 00:11:13.142 9896.199 - 9948.839: 40.1838% ( 215) 00:11:13.142 9948.839 - 10001.478: 41.9761% ( 195) 00:11:13.142 10001.478 - 10054.117: 43.8971% ( 209) 00:11:13.142 10054.117 - 10106.757: 45.7537% ( 202) 00:11:13.142 10106.757 - 10159.396: 47.7298% ( 215) 00:11:13.142 10159.396 - 10212.035: 49.5956% ( 
203) 00:11:13.142 10212.035 - 10264.675: 51.3879% ( 195) 00:11:13.142 10264.675 - 10317.314: 52.6654% ( 139) 00:11:13.142 10317.314 - 10369.953: 53.9430% ( 139) 00:11:13.142 10369.953 - 10422.593: 54.9632% ( 111) 00:11:13.142 10422.593 - 10475.232: 55.9191% ( 104) 00:11:13.142 10475.232 - 10527.871: 56.8199% ( 98) 00:11:13.142 10527.871 - 10580.511: 57.7022% ( 96) 00:11:13.142 10580.511 - 10633.150: 58.7040% ( 109) 00:11:13.142 10633.150 - 10685.790: 59.5221% ( 89) 00:11:13.142 10685.790 - 10738.429: 60.1103% ( 64) 00:11:13.142 10738.429 - 10791.068: 60.6342% ( 57) 00:11:13.142 10791.068 - 10843.708: 61.1673% ( 58) 00:11:13.142 10843.708 - 10896.347: 61.6728% ( 55) 00:11:13.142 10896.347 - 10948.986: 62.2702% ( 65) 00:11:13.142 10948.986 - 11001.626: 62.7941% ( 57) 00:11:13.142 11001.626 - 11054.265: 63.2812% ( 53) 00:11:13.142 11054.265 - 11106.904: 63.7684% ( 53) 00:11:13.142 11106.904 - 11159.544: 64.2096% ( 48) 00:11:13.142 11159.544 - 11212.183: 64.4853% ( 30) 00:11:13.142 11212.183 - 11264.822: 64.7702% ( 31) 00:11:13.142 11264.822 - 11317.462: 65.1195% ( 38) 00:11:13.142 11317.462 - 11370.101: 65.5699% ( 49) 00:11:13.142 11370.101 - 11422.741: 65.9191% ( 38) 00:11:13.142 11422.741 - 11475.380: 66.2224% ( 33) 00:11:13.142 11475.380 - 11528.019: 66.5993% ( 41) 00:11:13.142 11528.019 - 11580.659: 66.8015% ( 22) 00:11:13.142 11580.659 - 11633.298: 66.9761% ( 19) 00:11:13.142 11633.298 - 11685.937: 67.1415% ( 18) 00:11:13.142 11685.937 - 11738.577: 67.3346% ( 21) 00:11:13.142 11738.577 - 11791.216: 67.4816% ( 16) 00:11:13.142 11791.216 - 11843.855: 67.6838% ( 22) 00:11:13.142 11843.855 - 11896.495: 67.8768% ( 21) 00:11:13.142 11896.495 - 11949.134: 68.1342% ( 28) 00:11:13.142 11949.134 - 12001.773: 68.4743% ( 37) 00:11:13.142 12001.773 - 12054.413: 68.8143% ( 37) 00:11:13.142 12054.413 - 12107.052: 69.2279% ( 45) 00:11:13.142 12107.052 - 12159.692: 69.9357% ( 77) 00:11:13.142 12159.692 - 12212.331: 70.3493% ( 45) 00:11:13.142 12212.331 - 12264.970: 70.6526% ( 33) 00:11:13.142 12264.970 - 12317.610: 71.0294% ( 41) 00:11:13.142 12317.610 - 12370.249: 71.3695% ( 37) 00:11:13.142 12370.249 - 12422.888: 71.7831% ( 45) 00:11:13.142 12422.888 - 12475.528: 72.2243% ( 48) 00:11:13.142 12475.528 - 12528.167: 72.5827% ( 39) 00:11:13.142 12528.167 - 12580.806: 72.9504% ( 40) 00:11:13.142 12580.806 - 12633.446: 73.2445% ( 32) 00:11:13.142 12633.446 - 12686.085: 73.7684% ( 57) 00:11:13.142 12686.085 - 12738.724: 74.1176% ( 38) 00:11:13.142 12738.724 - 12791.364: 74.4301% ( 34) 00:11:13.142 12791.364 - 12844.003: 74.7978% ( 40) 00:11:13.142 12844.003 - 12896.643: 75.2941% ( 54) 00:11:13.142 12896.643 - 12949.282: 75.7721% ( 52) 00:11:13.142 12949.282 - 13001.921: 76.1581% ( 42) 00:11:13.142 13001.921 - 13054.561: 76.4890% ( 36) 00:11:13.142 13054.561 - 13107.200: 76.8382% ( 38) 00:11:13.142 13107.200 - 13159.839: 77.0221% ( 20) 00:11:13.142 13159.839 - 13212.479: 77.1783% ( 17) 00:11:13.142 13212.479 - 13265.118: 77.3070% ( 14) 00:11:13.142 13265.118 - 13317.757: 77.4632% ( 17) 00:11:13.142 13317.757 - 13370.397: 77.6011% ( 15) 00:11:13.142 13370.397 - 13423.036: 77.7482% ( 16) 00:11:13.142 13423.036 - 13475.676: 77.9228% ( 19) 00:11:13.142 13475.676 - 13580.954: 78.1985% ( 30) 00:11:13.142 13580.954 - 13686.233: 78.4375% ( 26) 00:11:13.142 13686.233 - 13791.512: 78.6305% ( 21) 00:11:13.142 13791.512 - 13896.790: 78.8879% ( 28) 00:11:13.142 13896.790 - 14002.069: 79.3566% ( 51) 00:11:13.142 14002.069 - 14107.348: 79.6967% ( 37) 00:11:13.142 14107.348 - 14212.627: 79.9724% ( 30) 00:11:13.142 14212.627 - 
14317.905: 80.3309% ( 39) 00:11:13.142 14317.905 - 14423.184: 80.8915% ( 61) 00:11:13.142 14423.184 - 14528.463: 81.3419% ( 49) 00:11:13.142 14528.463 - 14633.741: 81.8199% ( 52) 00:11:13.142 14633.741 - 14739.020: 82.4081% ( 64) 00:11:13.142 14739.020 - 14844.299: 83.2721% ( 94) 00:11:13.142 14844.299 - 14949.578: 84.5129% ( 135) 00:11:13.142 14949.578 - 15054.856: 85.6618% ( 125) 00:11:13.142 15054.856 - 15160.135: 86.2224% ( 61) 00:11:13.142 15160.135 - 15265.414: 86.5901% ( 40) 00:11:13.142 15265.414 - 15370.692: 86.9393% ( 38) 00:11:13.142 15370.692 - 15475.971: 87.4449% ( 55) 00:11:13.142 15475.971 - 15581.250: 87.9688% ( 57) 00:11:13.142 15581.250 - 15686.529: 88.5110% ( 59) 00:11:13.142 15686.529 - 15791.807: 89.3107% ( 87) 00:11:13.142 15791.807 - 15897.086: 90.0276% ( 78) 00:11:13.142 15897.086 - 16002.365: 90.5515% ( 57) 00:11:13.142 16002.365 - 16107.643: 91.0478% ( 54) 00:11:13.142 16107.643 - 16212.922: 91.4338% ( 42) 00:11:13.142 16212.922 - 16318.201: 91.8658% ( 47) 00:11:13.142 16318.201 - 16423.480: 92.3438% ( 52) 00:11:13.142 16423.480 - 16528.758: 92.5368% ( 21) 00:11:13.142 16528.758 - 16634.037: 92.6930% ( 17) 00:11:13.142 16634.037 - 16739.316: 92.8125% ( 13) 00:11:13.142 16739.316 - 16844.594: 92.8768% ( 7) 00:11:13.142 16844.594 - 16949.873: 92.9228% ( 5) 00:11:13.142 16949.873 - 17055.152: 92.9504% ( 3) 00:11:13.142 17055.152 - 17160.431: 93.0331% ( 9) 00:11:13.142 17160.431 - 17265.709: 93.0882% ( 6) 00:11:13.142 17265.709 - 17370.988: 93.2537% ( 18) 00:11:13.142 17370.988 - 17476.267: 93.3824% ( 14) 00:11:13.142 17476.267 - 17581.545: 93.6029% ( 24) 00:11:13.142 17581.545 - 17686.824: 93.7868% ( 20) 00:11:13.142 17686.824 - 17792.103: 93.8879% ( 11) 00:11:13.142 17792.103 - 17897.382: 94.2188% ( 36) 00:11:13.142 17897.382 - 18002.660: 94.5864% ( 40) 00:11:13.142 18002.660 - 18107.939: 94.7794% ( 21) 00:11:13.142 18107.939 - 18213.218: 95.0460% ( 29) 00:11:13.142 18213.218 - 18318.496: 95.4779% ( 47) 00:11:13.142 18318.496 - 18423.775: 95.8272% ( 38) 00:11:13.142 18423.775 - 18529.054: 96.0754% ( 27) 00:11:13.142 18529.054 - 18634.333: 96.3327% ( 28) 00:11:13.142 18634.333 - 18739.611: 96.6360% ( 33) 00:11:13.142 18739.611 - 18844.890: 96.8290% ( 21) 00:11:13.142 18844.890 - 18950.169: 97.0956% ( 29) 00:11:13.142 18950.169 - 19055.447: 97.3621% ( 29) 00:11:13.142 19055.447 - 19160.726: 97.6471% ( 31) 00:11:13.142 19160.726 - 19266.005: 97.7941% ( 16) 00:11:13.142 19266.005 - 19371.284: 97.8952% ( 11) 00:11:13.142 19371.284 - 19476.562: 97.9596% ( 7) 00:11:13.142 19476.562 - 19581.841: 98.0055% ( 5) 00:11:13.142 19581.841 - 19687.120: 98.0331% ( 3) 00:11:13.142 19687.120 - 19792.398: 98.0699% ( 4) 00:11:13.142 19792.398 - 19897.677: 98.1066% ( 4) 00:11:13.142 19897.677 - 20002.956: 98.1434% ( 4) 00:11:13.142 20002.956 - 20108.235: 98.1710% ( 3) 00:11:13.143 20108.235 - 20213.513: 98.2077% ( 4) 00:11:13.143 20213.513 - 20318.792: 98.2353% ( 3) 00:11:13.143 20529.349 - 20634.628: 98.2721% ( 4) 00:11:13.143 20634.628 - 20739.907: 98.3456% ( 8) 00:11:13.143 20739.907 - 20845.186: 98.3548% ( 1) 00:11:13.143 20845.186 - 20950.464: 98.3915% ( 4) 00:11:13.143 20950.464 - 21055.743: 98.4375% ( 5) 00:11:13.143 21055.743 - 21161.022: 98.4743% ( 4) 00:11:13.143 21161.022 - 21266.300: 98.5110% ( 4) 00:11:13.143 21266.300 - 21371.579: 98.5570% ( 5) 00:11:13.143 21371.579 - 21476.858: 98.5938% ( 4) 00:11:13.143 21476.858 - 21582.137: 98.6305% ( 4) 00:11:13.143 21582.137 - 21687.415: 98.6673% ( 4) 00:11:13.143 21687.415 - 21792.694: 98.7040% ( 4) 00:11:13.143 21792.694 - 
21897.973: 98.7500% ( 5) 00:11:13.143 21897.973 - 22003.251: 98.7868% ( 4) 00:11:13.143 22003.251 - 22108.530: 98.8235% ( 4) 00:11:13.143 31373.057 - 31583.614: 98.8511% ( 3) 00:11:13.143 31583.614 - 31794.172: 98.9062% ( 6) 00:11:13.143 31794.172 - 32004.729: 98.9522% ( 5) 00:11:13.143 32004.729 - 32215.287: 98.9890% ( 4) 00:11:13.143 32215.287 - 32425.844: 99.0349% ( 5) 00:11:13.143 32425.844 - 32636.402: 99.0809% ( 5) 00:11:13.143 32636.402 - 32846.959: 99.1360% ( 6) 00:11:13.143 32846.959 - 33057.516: 99.1820% ( 5) 00:11:13.143 33057.516 - 33268.074: 99.2188% ( 4) 00:11:13.143 33268.074 - 33478.631: 99.2647% ( 5) 00:11:13.143 33478.631 - 33689.189: 99.3107% ( 5) 00:11:13.143 33689.189 - 33899.746: 99.3566% ( 5) 00:11:13.143 33899.746 - 34110.304: 99.4026% ( 5) 00:11:13.143 34110.304 - 34320.861: 99.4118% ( 1) 00:11:13.143 42322.043 - 42532.601: 99.4210% ( 1) 00:11:13.143 42532.601 - 42743.158: 99.4669% ( 5) 00:11:13.143 42743.158 - 42953.716: 99.5129% ( 5) 00:11:13.143 42953.716 - 43164.273: 99.5588% ( 5) 00:11:13.143 43164.273 - 43374.831: 99.5956% ( 4) 00:11:13.143 43374.831 - 43585.388: 99.6507% ( 6) 00:11:13.143 43585.388 - 43795.945: 99.6967% ( 5) 00:11:13.143 43795.945 - 44006.503: 99.7518% ( 6) 00:11:13.143 44006.503 - 44217.060: 99.7978% ( 5) 00:11:13.143 44217.060 - 44427.618: 99.8438% ( 5) 00:11:13.143 44427.618 - 44638.175: 99.8897% ( 5) 00:11:13.143 44638.175 - 44848.733: 99.9357% ( 5) 00:11:13.143 44848.733 - 45059.290: 99.9816% ( 5) 00:11:13.143 45059.290 - 45269.847: 100.0000% ( 2) 00:11:13.143 00:11:13.143 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:13.143 ============================================================================== 00:11:13.143 Range in us Cumulative IO count 00:11:13.143 8159.100 - 8211.740: 0.0092% ( 1) 00:11:13.143 8369.658 - 8422.297: 0.0184% ( 1) 00:11:13.143 8422.297 - 8474.937: 0.0551% ( 4) 00:11:13.143 8474.937 - 8527.576: 0.1011% ( 5) 00:11:13.143 8527.576 - 8580.215: 0.1471% ( 5) 00:11:13.143 8580.215 - 8632.855: 0.2206% ( 8) 00:11:13.143 8632.855 - 8685.494: 0.3125% ( 10) 00:11:13.143 8685.494 - 8738.133: 0.4963% ( 20) 00:11:13.143 8738.133 - 8790.773: 0.6801% ( 20) 00:11:13.143 8790.773 - 8843.412: 0.9559% ( 30) 00:11:13.143 8843.412 - 8896.051: 1.3787% ( 46) 00:11:13.143 8896.051 - 8948.691: 1.8842% ( 55) 00:11:13.143 8948.691 - 9001.330: 2.2518% ( 40) 00:11:13.143 9001.330 - 9053.969: 2.8309% ( 63) 00:11:13.143 9053.969 - 9106.609: 3.4651% ( 69) 00:11:13.143 9106.609 - 9159.248: 4.3750% ( 99) 00:11:13.143 9159.248 - 9211.888: 5.7812% ( 153) 00:11:13.143 9211.888 - 9264.527: 7.2702% ( 162) 00:11:13.143 9264.527 - 9317.166: 8.7684% ( 163) 00:11:13.143 9317.166 - 9369.806: 11.1397% ( 258) 00:11:13.143 9369.806 - 9422.445: 13.4283% ( 249) 00:11:13.143 9422.445 - 9475.084: 16.1029% ( 291) 00:11:13.143 9475.084 - 9527.724: 18.9338% ( 308) 00:11:13.143 9527.724 - 9580.363: 22.0772% ( 342) 00:11:13.143 9580.363 - 9633.002: 25.3585% ( 357) 00:11:13.143 9633.002 - 9685.642: 28.1985% ( 309) 00:11:13.143 9685.642 - 9738.281: 31.1029% ( 316) 00:11:13.143 9738.281 - 9790.920: 33.9706% ( 312) 00:11:13.143 9790.920 - 9843.560: 36.7647% ( 304) 00:11:13.143 9843.560 - 9896.199: 39.4026% ( 287) 00:11:13.143 9896.199 - 9948.839: 41.3327% ( 210) 00:11:13.143 9948.839 - 10001.478: 42.5919% ( 137) 00:11:13.143 10001.478 - 10054.117: 44.2004% ( 175) 00:11:13.143 10054.117 - 10106.757: 45.4779% ( 139) 00:11:13.143 10106.757 - 10159.396: 47.2426% ( 192) 00:11:13.143 10159.396 - 10212.035: 48.7224% ( 161) 00:11:13.143 10212.035 - 10264.675: 
50.5515% ( 199) 00:11:13.143 10264.675 - 10317.314: 52.3989% ( 201) 00:11:13.143 10317.314 - 10369.953: 53.7500% ( 147) 00:11:13.143 10369.953 - 10422.593: 55.2206% ( 160) 00:11:13.143 10422.593 - 10475.232: 56.2592% ( 113) 00:11:13.143 10475.232 - 10527.871: 57.3070% ( 114) 00:11:13.143 10527.871 - 10580.511: 58.1342% ( 90) 00:11:13.143 10580.511 - 10633.150: 58.9062% ( 84) 00:11:13.143 10633.150 - 10685.790: 59.6415% ( 80) 00:11:13.143 10685.790 - 10738.429: 60.4871% ( 92) 00:11:13.143 10738.429 - 10791.068: 61.0846% ( 65) 00:11:13.143 10791.068 - 10843.708: 61.7371% ( 71) 00:11:13.143 10843.708 - 10896.347: 62.4173% ( 74) 00:11:13.143 10896.347 - 10948.986: 62.7665% ( 38) 00:11:13.143 10948.986 - 11001.626: 63.0974% ( 36) 00:11:13.143 11001.626 - 11054.265: 63.5110% ( 45) 00:11:13.143 11054.265 - 11106.904: 64.0441% ( 58) 00:11:13.143 11106.904 - 11159.544: 64.4485% ( 44) 00:11:13.143 11159.544 - 11212.183: 64.8162% ( 40) 00:11:13.143 11212.183 - 11264.822: 65.3952% ( 63) 00:11:13.143 11264.822 - 11317.462: 65.8732% ( 52) 00:11:13.143 11317.462 - 11370.101: 66.1489% ( 30) 00:11:13.143 11370.101 - 11422.741: 66.3971% ( 27) 00:11:13.143 11422.741 - 11475.380: 66.6452% ( 27) 00:11:13.143 11475.380 - 11528.019: 66.8750% ( 25) 00:11:13.143 11528.019 - 11580.659: 67.1048% ( 25) 00:11:13.143 11580.659 - 11633.298: 67.3162% ( 23) 00:11:13.143 11633.298 - 11685.937: 67.4816% ( 18) 00:11:13.143 11685.937 - 11738.577: 67.7022% ( 24) 00:11:13.143 11738.577 - 11791.216: 67.8493% ( 16) 00:11:13.143 11791.216 - 11843.855: 67.9779% ( 14) 00:11:13.143 11843.855 - 11896.495: 68.1526% ( 19) 00:11:13.143 11896.495 - 11949.134: 68.3272% ( 19) 00:11:13.143 11949.134 - 12001.773: 68.5386% ( 23) 00:11:13.143 12001.773 - 12054.413: 68.8603% ( 35) 00:11:13.143 12054.413 - 12107.052: 69.1820% ( 35) 00:11:13.143 12107.052 - 12159.692: 69.5312% ( 38) 00:11:13.143 12159.692 - 12212.331: 69.9540% ( 46) 00:11:13.143 12212.331 - 12264.970: 70.2298% ( 30) 00:11:13.143 12264.970 - 12317.610: 70.4871% ( 28) 00:11:13.143 12317.610 - 12370.249: 70.8640% ( 41) 00:11:13.143 12370.249 - 12422.888: 71.3051% ( 48) 00:11:13.143 12422.888 - 12475.528: 71.7096% ( 44) 00:11:13.143 12475.528 - 12528.167: 72.0588% ( 38) 00:11:13.143 12528.167 - 12580.806: 72.4724% ( 45) 00:11:13.143 12580.806 - 12633.446: 72.9136% ( 48) 00:11:13.143 12633.446 - 12686.085: 73.4375% ( 57) 00:11:13.143 12686.085 - 12738.724: 74.0257% ( 64) 00:11:13.143 12738.724 - 12791.364: 74.4485% ( 46) 00:11:13.143 12791.364 - 12844.003: 74.8713% ( 46) 00:11:13.143 12844.003 - 12896.643: 75.2849% ( 45) 00:11:13.143 12896.643 - 12949.282: 75.5882% ( 33) 00:11:13.143 12949.282 - 13001.921: 75.9099% ( 35) 00:11:13.143 13001.921 - 13054.561: 76.2132% ( 33) 00:11:13.143 13054.561 - 13107.200: 76.3511% ( 15) 00:11:13.143 13107.200 - 13159.839: 76.4706% ( 13) 00:11:13.143 13159.839 - 13212.479: 76.6176% ( 16) 00:11:13.143 13212.479 - 13265.118: 76.7188% ( 11) 00:11:13.143 13265.118 - 13317.757: 76.8934% ( 19) 00:11:13.143 13317.757 - 13370.397: 77.0864% ( 21) 00:11:13.143 13370.397 - 13423.036: 77.2702% ( 20) 00:11:13.143 13423.036 - 13475.676: 77.4540% ( 20) 00:11:13.143 13475.676 - 13580.954: 78.0331% ( 63) 00:11:13.143 13580.954 - 13686.233: 78.3732% ( 37) 00:11:13.143 13686.233 - 13791.512: 78.6213% ( 27) 00:11:13.143 13791.512 - 13896.790: 78.9338% ( 34) 00:11:13.143 13896.790 - 14002.069: 79.3750% ( 48) 00:11:13.143 14002.069 - 14107.348: 80.0000% ( 68) 00:11:13.143 14107.348 - 14212.627: 80.5147% ( 56) 00:11:13.143 14212.627 - 14317.905: 81.0478% ( 58) 00:11:13.143 
14317.905 - 14423.184: 81.4890% ( 48) 00:11:13.143 14423.184 - 14528.463: 81.9485% ( 50) 00:11:13.143 14528.463 - 14633.741: 82.4908% ( 59) 00:11:13.143 14633.741 - 14739.020: 83.5294% ( 113) 00:11:13.143 14739.020 - 14844.299: 84.5680% ( 113) 00:11:13.143 14844.299 - 14949.578: 85.3676% ( 87) 00:11:13.143 14949.578 - 15054.856: 85.7812% ( 45) 00:11:13.143 15054.856 - 15160.135: 86.2960% ( 56) 00:11:13.143 15160.135 - 15265.414: 86.6912% ( 43) 00:11:13.143 15265.414 - 15370.692: 87.2243% ( 58) 00:11:13.143 15370.692 - 15475.971: 87.9871% ( 83) 00:11:13.143 15475.971 - 15581.250: 88.8971% ( 99) 00:11:13.143 15581.250 - 15686.529: 89.6691% ( 84) 00:11:13.143 15686.529 - 15791.807: 90.1746% ( 55) 00:11:13.143 15791.807 - 15897.086: 90.5882% ( 45) 00:11:13.143 15897.086 - 16002.365: 90.9283% ( 37) 00:11:13.143 16002.365 - 16107.643: 91.3143% ( 42) 00:11:13.143 16107.643 - 16212.922: 91.7739% ( 50) 00:11:13.143 16212.922 - 16318.201: 92.2886% ( 56) 00:11:13.143 16318.201 - 16423.480: 92.4632% ( 19) 00:11:13.143 16423.480 - 16528.758: 92.5919% ( 14) 00:11:13.143 16528.758 - 16634.037: 92.7114% ( 13) 00:11:13.143 16634.037 - 16739.316: 92.8401% ( 14) 00:11:13.143 16739.316 - 16844.594: 92.9596% ( 13) 00:11:13.143 16844.594 - 16949.873: 93.0790% ( 13) 00:11:13.143 16949.873 - 17055.152: 93.1250% ( 5) 00:11:13.143 17055.152 - 17160.431: 93.3088% ( 20) 00:11:13.143 17160.431 - 17265.709: 93.5018% ( 21) 00:11:13.143 17265.709 - 17370.988: 93.7960% ( 32) 00:11:13.143 17370.988 - 17476.267: 94.0625% ( 29) 00:11:13.143 17476.267 - 17581.545: 94.4393% ( 41) 00:11:13.143 17581.545 - 17686.824: 94.6324% ( 21) 00:11:13.143 17686.824 - 17792.103: 94.8529% ( 24) 00:11:13.143 17792.103 - 17897.382: 95.0368% ( 20) 00:11:13.143 17897.382 - 18002.660: 95.1746% ( 15) 00:11:13.143 18002.660 - 18107.939: 95.2941% ( 13) 00:11:13.143 18107.939 - 18213.218: 95.3676% ( 8) 00:11:13.143 18213.218 - 18318.496: 95.4136% ( 5) 00:11:13.143 18318.496 - 18423.775: 95.4688% ( 6) 00:11:13.143 18423.775 - 18529.054: 95.5790% ( 12) 00:11:13.143 18529.054 - 18634.333: 95.8364% ( 28) 00:11:13.143 18634.333 - 18739.611: 96.1949% ( 39) 00:11:13.144 18739.611 - 18844.890: 96.2960% ( 11) 00:11:13.144 18844.890 - 18950.169: 96.4062% ( 12) 00:11:13.144 18950.169 - 19055.447: 96.4982% ( 10) 00:11:13.144 19055.447 - 19160.726: 96.6176% ( 13) 00:11:13.144 19160.726 - 19266.005: 96.7188% ( 11) 00:11:13.144 19266.005 - 19371.284: 96.7923% ( 8) 00:11:13.144 19371.284 - 19476.562: 96.9026% ( 12) 00:11:13.144 19476.562 - 19581.841: 97.0404% ( 15) 00:11:13.144 19581.841 - 19687.120: 97.2059% ( 18) 00:11:13.144 19687.120 - 19792.398: 97.4081% ( 22) 00:11:13.144 19792.398 - 19897.677: 97.7390% ( 36) 00:11:13.144 19897.677 - 20002.956: 98.0974% ( 39) 00:11:13.144 20002.956 - 20108.235: 98.3456% ( 27) 00:11:13.144 20108.235 - 20213.513: 98.4651% ( 13) 00:11:13.144 20213.513 - 20318.792: 98.5018% ( 4) 00:11:13.144 20318.792 - 20424.071: 98.5478% ( 5) 00:11:13.144 20424.071 - 20529.349: 98.5846% ( 4) 00:11:13.144 20529.349 - 20634.628: 98.6213% ( 4) 00:11:13.144 20634.628 - 20739.907: 98.6673% ( 5) 00:11:13.144 20739.907 - 20845.186: 98.7040% ( 4) 00:11:13.144 20845.186 - 20950.464: 98.7500% ( 5) 00:11:13.144 20950.464 - 21055.743: 98.7868% ( 4) 00:11:13.144 21055.743 - 21161.022: 98.8235% ( 4) 00:11:13.144 31162.500 - 31373.057: 98.8327% ( 1) 00:11:13.144 31373.057 - 31583.614: 98.9982% ( 18) 00:11:13.144 31583.614 - 31794.172: 99.0809% ( 9) 00:11:13.144 31794.172 - 32004.729: 99.1176% ( 4) 00:11:13.144 32004.729 - 32215.287: 99.1452% ( 3) 00:11:13.144 
32215.287 - 32425.844: 99.1728% ( 3) 00:11:13.144 32425.844 - 32636.402: 99.2188% ( 5) 00:11:13.144 32636.402 - 32846.959: 99.2647% ( 5) 00:11:13.144 32846.959 - 33057.516: 99.3199% ( 6) 00:11:13.144 33057.516 - 33268.074: 99.3658% ( 5) 00:11:13.144 33268.074 - 33478.631: 99.4118% ( 5) 00:11:13.144 41900.929 - 42111.486: 99.4577% ( 5) 00:11:13.144 42111.486 - 42322.043: 99.5037% ( 5) 00:11:13.144 42322.043 - 42532.601: 99.5404% ( 4) 00:11:13.144 42532.601 - 42743.158: 99.5864% ( 5) 00:11:13.144 42743.158 - 42953.716: 99.6415% ( 6) 00:11:13.144 42953.716 - 43164.273: 99.6875% ( 5) 00:11:13.144 43164.273 - 43374.831: 99.7335% ( 5) 00:11:13.144 43374.831 - 43585.388: 99.7886% ( 6) 00:11:13.144 43585.388 - 43795.945: 99.8346% ( 5) 00:11:13.144 43795.945 - 44006.503: 99.8713% ( 4) 00:11:13.144 44006.503 - 44217.060: 99.9265% ( 6) 00:11:13.144 44217.060 - 44427.618: 99.9632% ( 4) 00:11:13.144 44427.618 - 44638.175: 100.0000% ( 4) 00:11:13.144 00:11:13.144 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:13.144 ============================================================================== 00:11:13.144 Range in us Cumulative IO count 00:11:13.144 8527.576 - 8580.215: 0.0368% ( 4) 00:11:13.144 8580.215 - 8632.855: 0.0643% ( 3) 00:11:13.144 8632.855 - 8685.494: 0.1471% ( 9) 00:11:13.144 8685.494 - 8738.133: 0.2114% ( 7) 00:11:13.144 8738.133 - 8790.773: 0.4504% ( 26) 00:11:13.144 8790.773 - 8843.412: 0.6618% ( 23) 00:11:13.144 8843.412 - 8896.051: 0.9099% ( 27) 00:11:13.144 8896.051 - 8948.691: 1.3051% ( 43) 00:11:13.144 8948.691 - 9001.330: 1.9026% ( 65) 00:11:13.144 9001.330 - 9053.969: 2.3989% ( 54) 00:11:13.144 9053.969 - 9106.609: 3.0331% ( 69) 00:11:13.144 9106.609 - 9159.248: 4.0441% ( 110) 00:11:13.144 9159.248 - 9211.888: 5.0827% ( 113) 00:11:13.144 9211.888 - 9264.527: 6.6544% ( 171) 00:11:13.144 9264.527 - 9317.166: 8.5018% ( 201) 00:11:13.144 9317.166 - 9369.806: 10.9559% ( 267) 00:11:13.144 9369.806 - 9422.445: 13.7408% ( 303) 00:11:13.144 9422.445 - 9475.084: 16.6268% ( 314) 00:11:13.144 9475.084 - 9527.724: 19.2463% ( 285) 00:11:13.144 9527.724 - 9580.363: 22.2794% ( 330) 00:11:13.144 9580.363 - 9633.002: 25.5055% ( 351) 00:11:13.144 9633.002 - 9685.642: 28.5294% ( 329) 00:11:13.144 9685.642 - 9738.281: 31.5993% ( 334) 00:11:13.144 9738.281 - 9790.920: 34.2739% ( 291) 00:11:13.144 9790.920 - 9843.560: 36.7279% ( 267) 00:11:13.144 9843.560 - 9896.199: 39.1728% ( 266) 00:11:13.144 9896.199 - 9948.839: 41.3419% ( 236) 00:11:13.144 9948.839 - 10001.478: 43.1434% ( 196) 00:11:13.144 10001.478 - 10054.117: 44.8438% ( 185) 00:11:13.144 10054.117 - 10106.757: 46.2776% ( 156) 00:11:13.144 10106.757 - 10159.396: 48.0331% ( 191) 00:11:13.144 10159.396 - 10212.035: 49.4393% ( 153) 00:11:13.144 10212.035 - 10264.675: 50.9559% ( 165) 00:11:13.144 10264.675 - 10317.314: 52.4816% ( 166) 00:11:13.144 10317.314 - 10369.953: 53.9246% ( 157) 00:11:13.144 10369.953 - 10422.593: 54.9632% ( 113) 00:11:13.144 10422.593 - 10475.232: 56.1121% ( 125) 00:11:13.144 10475.232 - 10527.871: 57.1415% ( 112) 00:11:13.144 10527.871 - 10580.511: 58.4283% ( 140) 00:11:13.144 10580.511 - 10633.150: 59.5496% ( 122) 00:11:13.144 10633.150 - 10685.790: 60.4136% ( 94) 00:11:13.144 10685.790 - 10738.429: 61.1857% ( 84) 00:11:13.144 10738.429 - 10791.068: 61.8750% ( 75) 00:11:13.144 10791.068 - 10843.708: 62.3621% ( 53) 00:11:13.144 10843.708 - 10896.347: 62.9688% ( 66) 00:11:13.144 10896.347 - 10948.986: 63.4467% ( 52) 00:11:13.144 10948.986 - 11001.626: 63.8419% ( 43) 00:11:13.144 11001.626 - 11054.265: 
64.2096% ( 40) 00:11:13.144 11054.265 - 11106.904: 64.4669% ( 28) 00:11:13.144 11106.904 - 11159.544: 64.7335% ( 29) 00:11:13.144 11159.544 - 11212.183: 64.9449% ( 23) 00:11:13.144 11212.183 - 11264.822: 65.1195% ( 19) 00:11:13.144 11264.822 - 11317.462: 65.3125% ( 21) 00:11:13.144 11317.462 - 11370.101: 65.5147% ( 22) 00:11:13.144 11370.101 - 11422.741: 65.7445% ( 25) 00:11:13.144 11422.741 - 11475.380: 65.9651% ( 24) 00:11:13.144 11475.380 - 11528.019: 66.2040% ( 26) 00:11:13.144 11528.019 - 11580.659: 66.3879% ( 20) 00:11:13.144 11580.659 - 11633.298: 66.6268% ( 26) 00:11:13.144 11633.298 - 11685.937: 66.8842% ( 28) 00:11:13.144 11685.937 - 11738.577: 67.2518% ( 40) 00:11:13.144 11738.577 - 11791.216: 67.7390% ( 53) 00:11:13.144 11791.216 - 11843.855: 68.0790% ( 37) 00:11:13.144 11843.855 - 11896.495: 68.5662% ( 53) 00:11:13.144 11896.495 - 11949.134: 69.1544% ( 64) 00:11:13.144 11949.134 - 12001.773: 69.4945% ( 37) 00:11:13.144 12001.773 - 12054.413: 69.7702% ( 30) 00:11:13.144 12054.413 - 12107.052: 70.1654% ( 43) 00:11:13.144 12107.052 - 12159.692: 70.7996% ( 69) 00:11:13.144 12159.692 - 12212.331: 71.2316% ( 47) 00:11:13.144 12212.331 - 12264.970: 71.5901% ( 39) 00:11:13.144 12264.970 - 12317.610: 71.9026% ( 34) 00:11:13.144 12317.610 - 12370.249: 72.2059% ( 33) 00:11:13.144 12370.249 - 12422.888: 72.4357% ( 25) 00:11:13.144 12422.888 - 12475.528: 72.6287% ( 21) 00:11:13.144 12475.528 - 12528.167: 72.9412% ( 34) 00:11:13.144 12528.167 - 12580.806: 73.2261% ( 31) 00:11:13.144 12580.806 - 12633.446: 73.4283% ( 22) 00:11:13.144 12633.446 - 12686.085: 73.6857% ( 28) 00:11:13.144 12686.085 - 12738.724: 73.8787% ( 21) 00:11:13.144 12738.724 - 12791.364: 73.9982% ( 13) 00:11:13.144 12791.364 - 12844.003: 74.1176% ( 13) 00:11:13.144 12844.003 - 12896.643: 74.2463% ( 14) 00:11:13.144 12896.643 - 12949.282: 74.4301% ( 20) 00:11:13.144 12949.282 - 13001.921: 74.5864% ( 17) 00:11:13.144 13001.921 - 13054.561: 74.7335% ( 16) 00:11:13.144 13054.561 - 13107.200: 74.8897% ( 17) 00:11:13.144 13107.200 - 13159.839: 75.1103% ( 24) 00:11:13.144 13159.839 - 13212.479: 75.3401% ( 25) 00:11:13.144 13212.479 - 13265.118: 75.5423% ( 22) 00:11:13.144 13265.118 - 13317.757: 75.7629% ( 24) 00:11:13.144 13317.757 - 13370.397: 76.0386% ( 30) 00:11:13.144 13370.397 - 13423.036: 76.3327% ( 32) 00:11:13.144 13423.036 - 13475.676: 76.6360% ( 33) 00:11:13.144 13475.676 - 13580.954: 77.0864% ( 49) 00:11:13.144 13580.954 - 13686.233: 77.6195% ( 58) 00:11:13.144 13686.233 - 13791.512: 78.1710% ( 60) 00:11:13.144 13791.512 - 13896.790: 78.5570% ( 42) 00:11:13.144 13896.790 - 14002.069: 79.0625% ( 55) 00:11:13.144 14002.069 - 14107.348: 79.5864% ( 57) 00:11:13.144 14107.348 - 14212.627: 80.3860% ( 87) 00:11:13.144 14212.627 - 14317.905: 81.2408% ( 93) 00:11:13.144 14317.905 - 14423.184: 82.4632% ( 133) 00:11:13.144 14423.184 - 14528.463: 82.9228% ( 50) 00:11:13.144 14528.463 - 14633.741: 83.2537% ( 36) 00:11:13.144 14633.741 - 14739.020: 83.7040% ( 49) 00:11:13.144 14739.020 - 14844.299: 84.4853% ( 85) 00:11:13.144 14844.299 - 14949.578: 85.1011% ( 67) 00:11:13.144 14949.578 - 15054.856: 85.5974% ( 54) 00:11:13.144 15054.856 - 15160.135: 86.1489% ( 60) 00:11:13.144 15160.135 - 15265.414: 86.5717% ( 46) 00:11:13.144 15265.414 - 15370.692: 86.8934% ( 35) 00:11:13.144 15370.692 - 15475.971: 87.2426% ( 38) 00:11:13.144 15475.971 - 15581.250: 87.8125% ( 62) 00:11:13.144 15581.250 - 15686.529: 88.5570% ( 81) 00:11:13.144 15686.529 - 15791.807: 89.0625% ( 55) 00:11:13.144 15791.807 - 15897.086: 89.4761% ( 45) 00:11:13.144 
15897.086 - 16002.365: 89.9081% ( 47) 00:11:13.144 16002.365 - 16107.643: 90.4228% ( 56) 00:11:13.144 16107.643 - 16212.922: 90.9467% ( 57) 00:11:13.144 16212.922 - 16318.201: 91.7647% ( 89) 00:11:13.144 16318.201 - 16423.480: 92.2702% ( 55) 00:11:13.144 16423.480 - 16528.758: 92.5919% ( 35) 00:11:13.144 16528.758 - 16634.037: 92.9320% ( 37) 00:11:13.144 16634.037 - 16739.316: 93.2169% ( 31) 00:11:13.144 16739.316 - 16844.594: 93.4283% ( 23) 00:11:13.144 16844.594 - 16949.873: 93.6213% ( 21) 00:11:13.144 16949.873 - 17055.152: 93.8051% ( 20) 00:11:13.144 17055.152 - 17160.431: 93.9706% ( 18) 00:11:13.144 17160.431 - 17265.709: 94.2555% ( 31) 00:11:13.144 17265.709 - 17370.988: 94.5037% ( 27) 00:11:13.144 17370.988 - 17476.267: 94.8346% ( 36) 00:11:13.144 17476.267 - 17581.545: 95.0276% ( 21) 00:11:13.144 17581.545 - 17686.824: 95.1471% ( 13) 00:11:13.144 17686.824 - 17792.103: 95.2298% ( 9) 00:11:13.144 17792.103 - 17897.382: 95.2849% ( 6) 00:11:13.144 17897.382 - 18002.660: 95.3585% ( 8) 00:11:13.144 18002.660 - 18107.939: 95.4688% ( 12) 00:11:13.144 18107.939 - 18213.218: 95.6066% ( 15) 00:11:13.144 18213.218 - 18318.496: 95.7353% ( 14) 00:11:13.144 18318.496 - 18423.775: 95.8640% ( 14) 00:11:13.144 18423.775 - 18529.054: 95.9283% ( 7) 00:11:13.144 18529.054 - 18634.333: 96.1581% ( 25) 00:11:13.144 18634.333 - 18739.611: 96.3511% ( 21) 00:11:13.144 18739.611 - 18844.890: 96.5349% ( 20) 00:11:13.145 18844.890 - 18950.169: 96.6544% ( 13) 00:11:13.145 18950.169 - 19055.447: 96.7647% ( 12) 00:11:13.145 19055.447 - 19160.726: 96.8934% ( 14) 00:11:13.145 19160.726 - 19266.005: 97.0404% ( 16) 00:11:13.145 19266.005 - 19371.284: 97.1691% ( 14) 00:11:13.145 19371.284 - 19476.562: 97.3070% ( 15) 00:11:13.145 19476.562 - 19581.841: 97.3713% ( 7) 00:11:13.145 19581.841 - 19687.120: 97.4632% ( 10) 00:11:13.145 19687.120 - 19792.398: 97.5827% ( 13) 00:11:13.145 19792.398 - 19897.677: 97.7665% ( 20) 00:11:13.145 19897.677 - 20002.956: 98.0147% ( 27) 00:11:13.145 20002.956 - 20108.235: 98.2261% ( 23) 00:11:13.145 20108.235 - 20213.513: 98.3824% ( 17) 00:11:13.145 20213.513 - 20318.792: 98.5018% ( 13) 00:11:13.145 20318.792 - 20424.071: 98.6581% ( 17) 00:11:13.145 20424.071 - 20529.349: 98.7960% ( 15) 00:11:13.145 20529.349 - 20634.628: 98.8235% ( 3) 00:11:13.145 29478.040 - 29688.598: 98.8327% ( 1) 00:11:13.145 29688.598 - 29899.155: 98.8787% ( 5) 00:11:13.145 29899.155 - 30109.712: 98.9246% ( 5) 00:11:13.145 30109.712 - 30320.270: 98.9706% ( 5) 00:11:13.145 30320.270 - 30530.827: 99.0257% ( 6) 00:11:13.145 30530.827 - 30741.385: 99.0717% ( 5) 00:11:13.145 30741.385 - 30951.942: 99.1176% ( 5) 00:11:13.145 30951.942 - 31162.500: 99.1636% ( 5) 00:11:13.145 31162.500 - 31373.057: 99.2188% ( 6) 00:11:13.145 31373.057 - 31583.614: 99.2647% ( 5) 00:11:13.145 31583.614 - 31794.172: 99.3199% ( 6) 00:11:13.145 31794.172 - 32004.729: 99.3658% ( 5) 00:11:13.145 32004.729 - 32215.287: 99.4118% ( 5) 00:11:13.145 40637.584 - 40848.141: 99.4485% ( 4) 00:11:13.145 40848.141 - 41058.699: 99.4945% ( 5) 00:11:13.145 41058.699 - 41269.256: 99.5496% ( 6) 00:11:13.145 41269.256 - 41479.814: 99.5864% ( 4) 00:11:13.145 41479.814 - 41690.371: 99.6415% ( 6) 00:11:13.145 41690.371 - 41900.929: 99.6783% ( 4) 00:11:13.145 41900.929 - 42111.486: 99.7335% ( 6) 00:11:13.145 42111.486 - 42322.043: 99.7794% ( 5) 00:11:13.145 42322.043 - 42532.601: 99.8162% ( 4) 00:11:13.145 42532.601 - 42743.158: 99.8713% ( 6) 00:11:13.145 42743.158 - 42953.716: 99.9173% ( 5) 00:11:13.145 42953.716 - 43164.273: 99.9632% ( 5) 00:11:13.145 43164.273 - 
43374.831: 100.0000% ( 4) 00:11:13.145 00:11:13.145 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:13.145 ============================================================================== 00:11:13.145 Range in us Cumulative IO count 00:11:13.145 8474.937 - 8527.576: 0.0092% ( 1) 00:11:13.145 8580.215 - 8632.855: 0.0368% ( 3) 00:11:13.145 8632.855 - 8685.494: 0.0460% ( 1) 00:11:13.145 8685.494 - 8738.133: 0.0827% ( 4) 00:11:13.145 8738.133 - 8790.773: 0.1746% ( 10) 00:11:13.145 8790.773 - 8843.412: 0.4044% ( 25) 00:11:13.145 8843.412 - 8896.051: 0.7445% ( 37) 00:11:13.145 8896.051 - 8948.691: 1.2316% ( 53) 00:11:13.145 8948.691 - 9001.330: 1.9669% ( 80) 00:11:13.145 9001.330 - 9053.969: 2.5827% ( 67) 00:11:13.145 9053.969 - 9106.609: 3.3915% ( 88) 00:11:13.145 9106.609 - 9159.248: 4.3750% ( 107) 00:11:13.145 9159.248 - 9211.888: 5.5699% ( 130) 00:11:13.145 9211.888 - 9264.527: 6.9393% ( 149) 00:11:13.145 9264.527 - 9317.166: 8.8695% ( 210) 00:11:13.145 9317.166 - 9369.806: 11.2408% ( 258) 00:11:13.145 9369.806 - 9422.445: 13.7592% ( 274) 00:11:13.145 9422.445 - 9475.084: 16.3235% ( 279) 00:11:13.145 9475.084 - 9527.724: 19.0349% ( 295) 00:11:13.145 9527.724 - 9580.363: 21.9393% ( 316) 00:11:13.145 9580.363 - 9633.002: 24.8897% ( 321) 00:11:13.145 9633.002 - 9685.642: 27.9688% ( 335) 00:11:13.145 9685.642 - 9738.281: 31.2500% ( 357) 00:11:13.145 9738.281 - 9790.920: 34.6048% ( 365) 00:11:13.145 9790.920 - 9843.560: 37.0221% ( 263) 00:11:13.145 9843.560 - 9896.199: 39.2371% ( 241) 00:11:13.145 9896.199 - 9948.839: 41.0202% ( 194) 00:11:13.145 9948.839 - 10001.478: 42.5368% ( 165) 00:11:13.145 10001.478 - 10054.117: 44.0993% ( 170) 00:11:13.145 10054.117 - 10106.757: 45.6250% ( 166) 00:11:13.145 10106.757 - 10159.396: 47.0588% ( 156) 00:11:13.145 10159.396 - 10212.035: 48.7408% ( 183) 00:11:13.145 10212.035 - 10264.675: 50.4320% ( 184) 00:11:13.145 10264.675 - 10317.314: 51.9485% ( 165) 00:11:13.145 10317.314 - 10369.953: 53.4467% ( 163) 00:11:13.145 10369.953 - 10422.593: 55.0460% ( 174) 00:11:13.145 10422.593 - 10475.232: 56.0754% ( 112) 00:11:13.145 10475.232 - 10527.871: 57.0312% ( 104) 00:11:13.145 10527.871 - 10580.511: 57.9963% ( 105) 00:11:13.145 10580.511 - 10633.150: 59.0165% ( 111) 00:11:13.145 10633.150 - 10685.790: 59.8162% ( 87) 00:11:13.145 10685.790 - 10738.429: 60.6985% ( 96) 00:11:13.145 10738.429 - 10791.068: 61.7463% ( 114) 00:11:13.145 10791.068 - 10843.708: 62.6379% ( 97) 00:11:13.145 10843.708 - 10896.347: 63.2537% ( 67) 00:11:13.145 10896.347 - 10948.986: 63.5938% ( 37) 00:11:13.145 10948.986 - 11001.626: 63.8971% ( 33) 00:11:13.145 11001.626 - 11054.265: 64.2004% ( 33) 00:11:13.145 11054.265 - 11106.904: 64.4577% ( 28) 00:11:13.145 11106.904 - 11159.544: 64.6507% ( 21) 00:11:13.145 11159.544 - 11212.183: 64.8254% ( 19) 00:11:13.145 11212.183 - 11264.822: 64.9816% ( 17) 00:11:13.145 11264.822 - 11317.462: 65.1471% ( 18) 00:11:13.145 11317.462 - 11370.101: 65.3033% ( 17) 00:11:13.145 11370.101 - 11422.741: 65.5423% ( 26) 00:11:13.145 11422.741 - 11475.380: 65.7537% ( 23) 00:11:13.145 11475.380 - 11528.019: 66.0294% ( 30) 00:11:13.145 11528.019 - 11580.659: 66.5165% ( 53) 00:11:13.145 11580.659 - 11633.298: 66.9301% ( 45) 00:11:13.145 11633.298 - 11685.937: 67.4449% ( 56) 00:11:13.145 11685.937 - 11738.577: 67.8860% ( 48) 00:11:13.145 11738.577 - 11791.216: 68.3088% ( 46) 00:11:13.145 11791.216 - 11843.855: 68.6489% ( 37) 00:11:13.145 11843.855 - 11896.495: 68.9706% ( 35) 00:11:13.145 11896.495 - 11949.134: 69.6232% ( 71) 00:11:13.145 11949.134 - 
12001.773: 70.0184% ( 43) 00:11:13.145 12001.773 - 12054.413: 70.3952% ( 41) 00:11:13.145 12054.413 - 12107.052: 70.7445% ( 38) 00:11:13.145 12107.052 - 12159.692: 71.2224% ( 52) 00:11:13.145 12159.692 - 12212.331: 71.5533% ( 36) 00:11:13.145 12212.331 - 12264.970: 72.0037% ( 49) 00:11:13.145 12264.970 - 12317.610: 72.3438% ( 37) 00:11:13.145 12317.610 - 12370.249: 72.6011% ( 28) 00:11:13.145 12370.249 - 12422.888: 72.8493% ( 27) 00:11:13.145 12422.888 - 12475.528: 73.0147% ( 18) 00:11:13.145 12475.528 - 12528.167: 73.1618% ( 16) 00:11:13.145 12528.167 - 12580.806: 73.3180% ( 17) 00:11:13.145 12580.806 - 12633.446: 73.4375% ( 13) 00:11:13.145 12633.446 - 12686.085: 73.5846% ( 16) 00:11:13.145 12686.085 - 12738.724: 73.7316% ( 16) 00:11:13.145 12738.724 - 12791.364: 73.8327% ( 11) 00:11:13.145 12791.364 - 12844.003: 73.9430% ( 12) 00:11:13.145 12844.003 - 12896.643: 74.0533% ( 12) 00:11:13.145 12896.643 - 12949.282: 74.1176% ( 7) 00:11:13.145 12949.282 - 13001.921: 74.2831% ( 18) 00:11:13.145 13001.921 - 13054.561: 74.6507% ( 40) 00:11:13.145 13054.561 - 13107.200: 74.8897% ( 26) 00:11:13.145 13107.200 - 13159.839: 75.0184% ( 14) 00:11:13.145 13159.839 - 13212.479: 75.1195% ( 11) 00:11:13.145 13212.479 - 13265.118: 75.2298% ( 12) 00:11:13.145 13265.118 - 13317.757: 75.3309% ( 11) 00:11:13.145 13317.757 - 13370.397: 75.4871% ( 17) 00:11:13.145 13370.397 - 13423.036: 75.6434% ( 17) 00:11:13.145 13423.036 - 13475.676: 75.8364% ( 21) 00:11:13.145 13475.676 - 13580.954: 76.2868% ( 49) 00:11:13.145 13580.954 - 13686.233: 76.9945% ( 77) 00:11:13.145 13686.233 - 13791.512: 78.0239% ( 112) 00:11:13.145 13791.512 - 13896.790: 79.2096% ( 129) 00:11:13.145 13896.790 - 14002.069: 80.2665% ( 115) 00:11:13.145 14002.069 - 14107.348: 81.1029% ( 91) 00:11:13.145 14107.348 - 14212.627: 81.8382% ( 80) 00:11:13.145 14212.627 - 14317.905: 82.3989% ( 61) 00:11:13.145 14317.905 - 14423.184: 82.9320% ( 58) 00:11:13.145 14423.184 - 14528.463: 83.4007% ( 51) 00:11:13.145 14528.463 - 14633.741: 83.7592% ( 39) 00:11:13.145 14633.741 - 14739.020: 83.9430% ( 20) 00:11:13.145 14739.020 - 14844.299: 84.1636% ( 24) 00:11:13.145 14844.299 - 14949.578: 84.3842% ( 24) 00:11:13.145 14949.578 - 15054.856: 84.6232% ( 26) 00:11:13.145 15054.856 - 15160.135: 84.9724% ( 38) 00:11:13.145 15160.135 - 15265.414: 85.3217% ( 38) 00:11:13.145 15265.414 - 15370.692: 85.8180% ( 54) 00:11:13.145 15370.692 - 15475.971: 86.6636% ( 92) 00:11:13.145 15475.971 - 15581.250: 87.4540% ( 86) 00:11:13.145 15581.250 - 15686.529: 88.0790% ( 68) 00:11:13.145 15686.529 - 15791.807: 88.5018% ( 46) 00:11:13.145 15791.807 - 15897.086: 89.2739% ( 84) 00:11:13.145 15897.086 - 16002.365: 89.9357% ( 72) 00:11:13.146 16002.365 - 16107.643: 90.4412% ( 55) 00:11:13.146 16107.643 - 16212.922: 91.0018% ( 61) 00:11:13.146 16212.922 - 16318.201: 91.5625% ( 61) 00:11:13.146 16318.201 - 16423.480: 92.0129% ( 49) 00:11:13.146 16423.480 - 16528.758: 92.4816% ( 51) 00:11:13.146 16528.758 - 16634.037: 93.0055% ( 57) 00:11:13.146 16634.037 - 16739.316: 93.4467% ( 48) 00:11:13.146 16739.316 - 16844.594: 93.6397% ( 21) 00:11:13.146 16844.594 - 16949.873: 93.8235% ( 20) 00:11:13.146 16949.873 - 17055.152: 94.1360% ( 34) 00:11:13.146 17055.152 - 17160.431: 94.3107% ( 19) 00:11:13.146 17160.431 - 17265.709: 94.4669% ( 17) 00:11:13.146 17265.709 - 17370.988: 94.5588% ( 10) 00:11:13.146 17370.988 - 17476.267: 94.6232% ( 7) 00:11:13.146 17476.267 - 17581.545: 94.7518% ( 14) 00:11:13.146 17581.545 - 17686.824: 94.9173% ( 18) 00:11:13.146 17686.824 - 17792.103: 95.2022% ( 31) 
00:11:13.146 17792.103 - 17897.382: 95.3768% ( 19) 00:11:13.146 17897.382 - 18002.660: 95.5790% ( 22) 00:11:13.146 18002.660 - 18107.939: 95.7261% ( 16) 00:11:13.146 18107.939 - 18213.218: 95.8640% ( 15) 00:11:13.146 18213.218 - 18318.496: 96.0202% ( 17) 00:11:13.146 18318.496 - 18423.775: 96.1489% ( 14) 00:11:13.146 18423.775 - 18529.054: 96.2500% ( 11) 00:11:13.146 18529.054 - 18634.333: 96.4246% ( 19) 00:11:13.146 18634.333 - 18739.611: 96.7463% ( 35) 00:11:13.146 18739.611 - 18844.890: 96.9669% ( 24) 00:11:13.146 18844.890 - 18950.169: 97.1048% ( 15) 00:11:13.146 18950.169 - 19055.447: 97.2335% ( 14) 00:11:13.146 19055.447 - 19160.726: 97.3989% ( 18) 00:11:13.146 19160.726 - 19266.005: 97.5827% ( 20) 00:11:13.146 19266.005 - 19371.284: 97.7665% ( 20) 00:11:13.146 19371.284 - 19476.562: 98.0423% ( 30) 00:11:13.146 19476.562 - 19581.841: 98.0790% ( 4) 00:11:13.146 19581.841 - 19687.120: 98.1066% ( 3) 00:11:13.146 19687.120 - 19792.398: 98.1342% ( 3) 00:11:13.146 19792.398 - 19897.677: 98.1618% ( 3) 00:11:13.146 19897.677 - 20002.956: 98.1985% ( 4) 00:11:13.146 20002.956 - 20108.235: 98.3272% ( 14) 00:11:13.146 20108.235 - 20213.513: 98.4283% ( 11) 00:11:13.146 20213.513 - 20318.792: 98.6121% ( 20) 00:11:13.146 20318.792 - 20424.071: 98.6949% ( 9) 00:11:13.146 20424.071 - 20529.349: 98.7592% ( 7) 00:11:13.146 20529.349 - 20634.628: 98.7960% ( 4) 00:11:13.146 20634.628 - 20739.907: 98.8235% ( 3) 00:11:13.146 28635.810 - 28846.368: 98.8327% ( 1) 00:11:13.146 29056.925 - 29267.483: 98.8419% ( 1) 00:11:13.146 29478.040 - 29688.598: 98.9246% ( 9) 00:11:13.146 29688.598 - 29899.155: 99.0625% ( 15) 00:11:13.146 29899.155 - 30109.712: 99.2004% ( 15) 00:11:13.146 30109.712 - 30320.270: 99.2371% ( 4) 00:11:13.146 30320.270 - 30530.827: 99.2739% ( 4) 00:11:13.146 30530.827 - 30741.385: 99.3107% ( 4) 00:11:13.146 30741.385 - 30951.942: 99.3474% ( 4) 00:11:13.146 30951.942 - 31162.500: 99.3750% ( 3) 00:11:13.146 31162.500 - 31373.057: 99.4026% ( 3) 00:11:13.146 31373.057 - 31583.614: 99.4118% ( 1) 00:11:13.146 38321.452 - 38532.010: 99.4577% ( 5) 00:11:13.146 38532.010 - 38742.567: 99.5129% ( 6) 00:11:13.146 39584.797 - 39795.354: 99.5312% ( 2) 00:11:13.146 39795.354 - 40005.912: 99.5680% ( 4) 00:11:13.146 40005.912 - 40216.469: 99.6048% ( 4) 00:11:13.146 40216.469 - 40427.027: 99.6415% ( 4) 00:11:13.146 40427.027 - 40637.584: 99.6783% ( 4) 00:11:13.146 40637.584 - 40848.141: 99.7243% ( 5) 00:11:13.146 40848.141 - 41058.699: 99.7702% ( 5) 00:11:13.146 41058.699 - 41269.256: 99.8162% ( 5) 00:11:13.146 41269.256 - 41479.814: 99.8621% ( 5) 00:11:13.146 41479.814 - 41690.371: 99.9173% ( 6) 00:11:13.146 41690.371 - 41900.929: 99.9632% ( 5) 00:11:13.146 41900.929 - 42111.486: 100.0000% ( 4) 00:11:13.146 00:11:13.146 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:13.146 ============================================================================== 00:11:13.146 Range in us Cumulative IO count 00:11:13.146 8527.576 - 8580.215: 0.0092% ( 1) 00:11:13.146 8685.494 - 8738.133: 0.1379% ( 14) 00:11:13.146 8738.133 - 8790.773: 0.3676% ( 25) 00:11:13.146 8790.773 - 8843.412: 0.6526% ( 31) 00:11:13.146 8843.412 - 8896.051: 1.1581% ( 55) 00:11:13.146 8896.051 - 8948.691: 1.6636% ( 55) 00:11:13.146 8948.691 - 9001.330: 2.3897% ( 79) 00:11:13.146 9001.330 - 9053.969: 2.9320% ( 59) 00:11:13.146 9053.969 - 9106.609: 3.7316% ( 87) 00:11:13.146 9106.609 - 9159.248: 4.4853% ( 82) 00:11:13.146 9159.248 - 9211.888: 5.4044% ( 100) 00:11:13.146 9211.888 - 9264.527: 6.8382% ( 156) 00:11:13.146 9264.527 - 
9317.166: 8.5938% ( 191) 00:11:13.146 9317.166 - 9369.806: 10.8088% ( 241) 00:11:13.146 9369.806 - 9422.445: 13.4926% ( 292) 00:11:13.146 9422.445 - 9475.084: 16.3419% ( 310) 00:11:13.146 9475.084 - 9527.724: 18.9430% ( 283) 00:11:13.146 9527.724 - 9580.363: 21.7188% ( 302) 00:11:13.146 9580.363 - 9633.002: 24.4301% ( 295) 00:11:13.146 9633.002 - 9685.642: 27.6471% ( 350) 00:11:13.146 9685.642 - 9738.281: 30.5974% ( 321) 00:11:13.146 9738.281 - 9790.920: 33.2169% ( 285) 00:11:13.146 9790.920 - 9843.560: 36.0662% ( 310) 00:11:13.146 9843.560 - 9896.199: 38.2169% ( 234) 00:11:13.146 9896.199 - 9948.839: 40.0827% ( 203) 00:11:13.146 9948.839 - 10001.478: 41.8382% ( 191) 00:11:13.146 10001.478 - 10054.117: 43.3364% ( 163) 00:11:13.146 10054.117 - 10106.757: 44.8805% ( 168) 00:11:13.146 10106.757 - 10159.396: 46.5717% ( 184) 00:11:13.146 10159.396 - 10212.035: 47.9688% ( 152) 00:11:13.146 10212.035 - 10264.675: 49.5680% ( 174) 00:11:13.146 10264.675 - 10317.314: 51.3879% ( 198) 00:11:13.146 10317.314 - 10369.953: 52.8585% ( 160) 00:11:13.146 10369.953 - 10422.593: 53.9798% ( 122) 00:11:13.146 10422.593 - 10475.232: 55.0368% ( 115) 00:11:13.146 10475.232 - 10527.871: 56.1581% ( 122) 00:11:13.146 10527.871 - 10580.511: 57.1140% ( 104) 00:11:13.146 10580.511 - 10633.150: 58.2077% ( 119) 00:11:13.146 10633.150 - 10685.790: 59.1912% ( 107) 00:11:13.146 10685.790 - 10738.429: 60.3309% ( 124) 00:11:13.146 10738.429 - 10791.068: 61.3971% ( 116) 00:11:13.146 10791.068 - 10843.708: 62.2702% ( 95) 00:11:13.146 10843.708 - 10896.347: 62.8401% ( 62) 00:11:13.146 10896.347 - 10948.986: 63.2537% ( 45) 00:11:13.146 10948.986 - 11001.626: 63.5846% ( 36) 00:11:13.146 11001.626 - 11054.265: 63.8419% ( 28) 00:11:13.146 11054.265 - 11106.904: 64.0074% ( 18) 00:11:13.146 11106.904 - 11159.544: 64.1912% ( 20) 00:11:13.146 11159.544 - 11212.183: 64.4485% ( 28) 00:11:13.146 11212.183 - 11264.822: 64.6232% ( 19) 00:11:13.146 11264.822 - 11317.462: 64.7610% ( 15) 00:11:13.146 11317.462 - 11370.101: 65.0735% ( 34) 00:11:13.146 11370.101 - 11422.741: 65.4136% ( 37) 00:11:13.146 11422.741 - 11475.380: 65.9835% ( 62) 00:11:13.146 11475.380 - 11528.019: 66.5074% ( 57) 00:11:13.146 11528.019 - 11580.659: 67.0680% ( 61) 00:11:13.146 11580.659 - 11633.298: 67.6379% ( 62) 00:11:13.146 11633.298 - 11685.937: 68.0699% ( 47) 00:11:13.146 11685.937 - 11738.577: 68.6397% ( 62) 00:11:13.146 11738.577 - 11791.216: 68.8603% ( 24) 00:11:13.146 11791.216 - 11843.855: 69.0257% ( 18) 00:11:13.146 11843.855 - 11896.495: 69.1912% ( 18) 00:11:13.146 11896.495 - 11949.134: 69.3566% ( 18) 00:11:13.146 11949.134 - 12001.773: 69.5772% ( 24) 00:11:13.146 12001.773 - 12054.413: 69.8162% ( 26) 00:11:13.146 12054.413 - 12107.052: 70.0643% ( 27) 00:11:13.146 12107.052 - 12159.692: 70.5147% ( 49) 00:11:13.146 12159.692 - 12212.331: 70.9467% ( 47) 00:11:13.146 12212.331 - 12264.970: 71.4798% ( 58) 00:11:13.146 12264.970 - 12317.610: 71.8015% ( 35) 00:11:13.146 12317.610 - 12370.249: 72.0129% ( 23) 00:11:13.146 12370.249 - 12422.888: 72.3529% ( 37) 00:11:13.146 12422.888 - 12475.528: 72.7390% ( 42) 00:11:13.146 12475.528 - 12528.167: 72.9504% ( 23) 00:11:13.146 12528.167 - 12580.806: 73.1342% ( 20) 00:11:13.146 12580.806 - 12633.446: 73.3364% ( 22) 00:11:13.146 12633.446 - 12686.085: 73.5386% ( 22) 00:11:13.146 12686.085 - 12738.724: 73.8327% ( 32) 00:11:13.146 12738.724 - 12791.364: 74.1636% ( 36) 00:11:13.146 12791.364 - 12844.003: 74.5221% ( 39) 00:11:13.146 12844.003 - 12896.643: 74.7610% ( 26) 00:11:13.146 12896.643 - 12949.282: 74.8897% ( 14) 
00:11:13.146 12949.282 - 13001.921: 74.9632% ( 8) 00:11:13.146 13001.921 - 13054.561: 75.0276% ( 7) 00:11:13.146 13054.561 - 13107.200: 75.1471% ( 13) 00:11:13.146 13107.200 - 13159.839: 75.2665% ( 13) 00:11:13.146 13159.839 - 13212.479: 75.4136% ( 16) 00:11:13.146 13212.479 - 13265.118: 75.6618% ( 27) 00:11:13.146 13265.118 - 13317.757: 75.9099% ( 27) 00:11:13.146 13317.757 - 13370.397: 76.3235% ( 45) 00:11:13.146 13370.397 - 13423.036: 76.8566% ( 58) 00:11:13.146 13423.036 - 13475.676: 77.4265% ( 62) 00:11:13.146 13475.676 - 13580.954: 78.3272% ( 98) 00:11:13.146 13580.954 - 13686.233: 78.9798% ( 71) 00:11:13.146 13686.233 - 13791.512: 79.4577% ( 52) 00:11:13.146 13791.512 - 13896.790: 80.0919% ( 69) 00:11:13.146 13896.790 - 14002.069: 80.5515% ( 50) 00:11:13.146 14002.069 - 14107.348: 80.8824% ( 36) 00:11:13.146 14107.348 - 14212.627: 81.4522% ( 62) 00:11:13.146 14212.627 - 14317.905: 81.9485% ( 54) 00:11:13.146 14317.905 - 14423.184: 82.2794% ( 36) 00:11:13.146 14423.184 - 14528.463: 82.5460% ( 29) 00:11:13.146 14528.463 - 14633.741: 82.7849% ( 26) 00:11:13.146 14633.741 - 14739.020: 83.0055% ( 24) 00:11:13.146 14739.020 - 14844.299: 83.3180% ( 34) 00:11:13.146 14844.299 - 14949.578: 83.8879% ( 62) 00:11:13.146 14949.578 - 15054.856: 84.4945% ( 66) 00:11:13.146 15054.856 - 15160.135: 85.0460% ( 60) 00:11:13.146 15160.135 - 15265.414: 85.4596% ( 45) 00:11:13.146 15265.414 - 15370.692: 85.7904% ( 36) 00:11:13.146 15370.692 - 15475.971: 86.5257% ( 80) 00:11:13.146 15475.971 - 15581.250: 87.2978% ( 84) 00:11:13.146 15581.250 - 15686.529: 88.1250% ( 90) 00:11:13.146 15686.529 - 15791.807: 89.2279% ( 120) 00:11:13.146 15791.807 - 15897.086: 90.1746% ( 103) 00:11:13.146 15897.086 - 16002.365: 90.6618% ( 53) 00:11:13.146 16002.365 - 16107.643: 91.3051% ( 70) 00:11:13.146 16107.643 - 16212.922: 91.6636% ( 39) 00:11:13.146 16212.922 - 16318.201: 92.0864% ( 46) 00:11:13.146 16318.201 - 16423.480: 92.3897% ( 33) 00:11:13.146 16423.480 - 16528.758: 92.6195% ( 25) 00:11:13.147 16528.758 - 16634.037: 92.9044% ( 31) 00:11:13.147 16634.037 - 16739.316: 93.1434% ( 26) 00:11:13.147 16739.316 - 16844.594: 93.2996% ( 17) 00:11:13.147 16844.594 - 16949.873: 93.6489% ( 38) 00:11:13.147 16949.873 - 17055.152: 93.8419% ( 21) 00:11:13.147 17055.152 - 17160.431: 94.0809% ( 26) 00:11:13.147 17160.431 - 17265.709: 94.3474% ( 29) 00:11:13.147 17265.709 - 17370.988: 94.6232% ( 30) 00:11:13.147 17370.988 - 17476.267: 95.0551% ( 47) 00:11:13.147 17476.267 - 17581.545: 95.2849% ( 25) 00:11:13.147 17581.545 - 17686.824: 95.5515% ( 29) 00:11:13.147 17686.824 - 17792.103: 95.7904% ( 26) 00:11:13.147 17792.103 - 17897.382: 96.0754% ( 31) 00:11:13.147 17897.382 - 18002.660: 96.2592% ( 20) 00:11:13.147 18002.660 - 18107.939: 96.4246% ( 18) 00:11:13.147 18107.939 - 18213.218: 96.5625% ( 15) 00:11:13.147 18213.218 - 18318.496: 96.6544% ( 10) 00:11:13.147 18318.496 - 18423.775: 96.7096% ( 6) 00:11:13.147 18423.775 - 18529.054: 96.7647% ( 6) 00:11:13.147 18529.054 - 18634.333: 96.8566% ( 10) 00:11:13.147 18634.333 - 18739.611: 96.9853% ( 14) 00:11:13.147 18739.611 - 18844.890: 97.3254% ( 37) 00:11:13.147 18844.890 - 18950.169: 97.4632% ( 15) 00:11:13.147 18950.169 - 19055.447: 97.6654% ( 22) 00:11:13.147 19055.447 - 19160.726: 97.9044% ( 26) 00:11:13.147 19160.726 - 19266.005: 97.9688% ( 7) 00:11:13.147 19266.005 - 19371.284: 98.0515% ( 9) 00:11:13.147 19371.284 - 19476.562: 98.1250% ( 8) 00:11:13.147 19476.562 - 19581.841: 98.1526% ( 3) 00:11:13.147 19581.841 - 19687.120: 98.1801% ( 3) 00:11:13.147 19687.120 - 19792.398: 
98.1985% ( 2) 00:11:13.147 19792.398 - 19897.677: 98.2261% ( 3) 00:11:13.147 19897.677 - 20002.956: 98.2353% ( 1) 00:11:13.147 20318.792 - 20424.071: 98.2721% ( 4) 00:11:13.147 20424.071 - 20529.349: 98.3548% ( 9) 00:11:13.147 20529.349 - 20634.628: 98.6305% ( 30) 00:11:13.147 20634.628 - 20739.907: 98.7316% ( 11) 00:11:13.147 20739.907 - 20845.186: 98.8051% ( 8) 00:11:13.147 20845.186 - 20950.464: 98.8235% ( 2) 00:11:13.147 28214.696 - 28425.253: 98.8419% ( 2) 00:11:13.147 28425.253 - 28635.810: 98.8971% ( 6) 00:11:13.147 28635.810 - 28846.368: 98.9890% ( 10) 00:11:13.147 28846.368 - 29056.925: 99.0625% ( 8) 00:11:13.147 29056.925 - 29267.483: 99.1452% ( 9) 00:11:13.147 29267.483 - 29478.040: 99.1912% ( 5) 00:11:13.147 29478.040 - 29688.598: 99.2371% ( 5) 00:11:13.147 29688.598 - 29899.155: 99.2739% ( 4) 00:11:13.147 29899.155 - 30109.712: 99.3199% ( 5) 00:11:13.147 30109.712 - 30320.270: 99.3566% ( 4) 00:11:13.147 30320.270 - 30530.827: 99.4026% ( 5) 00:11:13.147 30530.827 - 30741.385: 99.4118% ( 1) 00:11:13.147 37268.665 - 37479.222: 99.4393% ( 3) 00:11:13.147 37479.222 - 37689.780: 99.4669% ( 3) 00:11:13.147 37689.780 - 37900.337: 99.5404% ( 8) 00:11:13.147 37900.337 - 38110.895: 99.6048% ( 7) 00:11:13.147 38532.010 - 38742.567: 99.6324% ( 3) 00:11:13.147 38742.567 - 38953.124: 99.6783% ( 5) 00:11:13.147 38953.124 - 39163.682: 99.7151% ( 4) 00:11:13.147 39163.682 - 39374.239: 99.7518% ( 4) 00:11:13.147 39374.239 - 39584.797: 99.7886% ( 4) 00:11:13.147 39584.797 - 39795.354: 99.8346% ( 5) 00:11:13.147 39795.354 - 40005.912: 99.8713% ( 4) 00:11:13.147 40005.912 - 40216.469: 99.9173% ( 5) 00:11:13.147 40216.469 - 40427.027: 99.9632% ( 5) 00:11:13.147 40427.027 - 40637.584: 100.0000% ( 4) 00:11:13.147 00:11:13.147 11:18:40 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:11:13.147 00:11:13.147 real 0m2.703s 00:11:13.147 user 0m2.270s 00:11:13.147 sys 0m0.304s 00:11:13.147 11:18:40 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.147 11:18:40 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:11:13.147 ************************************ 00:11:13.147 END TEST nvme_perf 00:11:13.147 ************************************ 00:11:13.147 11:18:40 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:13.147 11:18:40 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:13.147 11:18:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.147 11:18:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:13.147 ************************************ 00:11:13.147 START TEST nvme_hello_world 00:11:13.147 ************************************ 00:11:13.147 11:18:40 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:13.406 Initializing NVMe Controllers 00:11:13.406 Attached to 0000:00:10.0 00:11:13.406 Namespace ID: 1 size: 6GB 00:11:13.406 Attached to 0000:00:11.0 00:11:13.406 Namespace ID: 1 size: 5GB 00:11:13.406 Attached to 0000:00:13.0 00:11:13.406 Namespace ID: 1 size: 1GB 00:11:13.406 Attached to 0000:00:12.0 00:11:13.406 Namespace ID: 1 size: 4GB 00:11:13.406 Namespace ID: 2 size: 4GB 00:11:13.406 Namespace ID: 3 size: 4GB 00:11:13.406 Initialization complete. 00:11:13.406 INFO: using host memory buffer for IO 00:11:13.406 Hello world! 00:11:13.406 INFO: using host memory buffer for IO 00:11:13.406 Hello world! 
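The nvme_hello_world lines here come from SPDK's build/examples/hello_world, which attaches to every controller found on probe and performs one write/read round trip per namespace; the remaining per-namespace "Hello world!" lines continue below. A minimal sketch of that probe/attach/poll flow, assuming a single active namespace and with error handling trimmed (this is not the example's actual source):

```c
/* Minimal sketch of the hello_world flow: probe, attach, one I/O qpair,
 * one polled read. Not the SPDK example's actual source. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr;

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
	return true; /* attach to every controller the probe finds */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	g_ctrlr = ctrlr; /* keep only the last one for this sketch */
}

static void
io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	*(bool *)arg = true;
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "hello_sketch"; /* hypothetical app name */
	if (spdk_env_init(&opts) < 0 ||
	    spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0 ||
	    g_ctrlr == NULL) {
		return 1;
	}

	struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(g_ctrlr, 1);
	struct spdk_nvme_qpair *qp = spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);
	char *buf = spdk_zmalloc(0x1000, 0x1000, NULL,
	                         SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	bool done = false;

	/* Read LBA 0; the real example writes "Hello world!" first and reads
	 * it back, which is where the log lines above come from. */
	spdk_nvme_ns_cmd_read(ns, qp, buf, 0, 1, io_done, &done, 0);
	while (!done) {
		spdk_nvme_qpair_process_completions(qp, 0);
	}
	printf("Hello world!\n");

	spdk_free(buf);
	spdk_nvme_detach(g_ctrlr);
	return 0;
}
```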
00:11:13.406 INFO: using host memory buffer for IO 00:11:13.406 Hello world! 00:11:13.406 INFO: using host memory buffer for IO 00:11:13.406 Hello world! 00:11:13.406 INFO: using host memory buffer for IO 00:11:13.406 Hello world! 00:11:13.406 INFO: using host memory buffer for IO 00:11:13.406 Hello world! 00:11:13.406 00:11:13.406 real 0m0.296s 00:11:13.406 user 0m0.108s 00:11:13.406 sys 0m0.145s 00:11:13.406 ************************************ 00:11:13.406 END TEST nvme_hello_world 00:11:13.406 ************************************ 00:11:13.406 11:18:40 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.406 11:18:40 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:13.664 11:18:40 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:13.664 11:18:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:13.664 11:18:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.664 11:18:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:13.664 ************************************ 00:11:13.664 START TEST nvme_sgl 00:11:13.664 ************************************ 00:11:13.664 11:18:40 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:13.923 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:11:13.923 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:11:13.923 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:11:13.923 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:11:13.923 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:11:13.923 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:11:13.923 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:11:13.923 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:11:13.923 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:11:13.923 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:11:13.923 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:11:13.923 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:11:13.923 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:11:13.923 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:11:13.923 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:11:13.923 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:11:13.923 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:11:13.923 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:11:13.923 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:11:13.923 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:11:13.923 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:11:13.923 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:11:13.923 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:11:13.923 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:11:13.923 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:11:13.923 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:11:13.923 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:11:13.923 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:11:13.923 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:11:13.923 0000:00:12.0: build_io_request_5 Invalid IO length parameter 
00:11:13.923 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:11:13.923 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:11:13.923 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:11:13.923 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:11:13.923 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:11:13.923 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:11:13.923 NVMe Readv/Writev Request test 00:11:13.923 Attached to 0000:00:10.0 00:11:13.923 Attached to 0000:00:11.0 00:11:13.923 Attached to 0000:00:13.0 00:11:13.923 Attached to 0000:00:12.0 00:11:13.923 0000:00:10.0: build_io_request_2 test passed 00:11:13.923 0000:00:10.0: build_io_request_4 test passed 00:11:13.923 0000:00:10.0: build_io_request_5 test passed 00:11:13.923 0000:00:10.0: build_io_request_6 test passed 00:11:13.923 0000:00:10.0: build_io_request_7 test passed 00:11:13.923 0000:00:10.0: build_io_request_10 test passed 00:11:13.923 0000:00:11.0: build_io_request_2 test passed 00:11:13.923 0000:00:11.0: build_io_request_4 test passed 00:11:13.923 0000:00:11.0: build_io_request_5 test passed 00:11:13.923 0000:00:11.0: build_io_request_6 test passed 00:11:13.923 0000:00:11.0: build_io_request_7 test passed 00:11:13.923 0000:00:11.0: build_io_request_10 test passed 00:11:13.923 Cleaning up... 00:11:13.923 00:11:13.923 real 0m0.380s 00:11:13.923 user 0m0.179s 00:11:13.923 sys 0m0.156s 00:11:13.923 11:18:40 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.923 11:18:40 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:11:13.923 ************************************ 00:11:13.923 END TEST nvme_sgl 00:11:13.923 ************************************ 00:11:13.923 11:18:40 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:13.923 11:18:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:13.923 11:18:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.923 11:18:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:13.923 ************************************ 00:11:13.923 START TEST nvme_e2edp 00:11:13.923 ************************************ 00:11:13.923 11:18:40 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:14.181 NVMe Write/Read with End-to-End data protection test 00:11:14.181 Attached to 0000:00:10.0 00:11:14.181 Attached to 0000:00:11.0 00:11:14.181 Attached to 0000:00:13.0 00:11:14.181 Attached to 0000:00:12.0 00:11:14.181 Cleaning up... 
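The nvme_sgl pass/fail lines above exercise scattered payloads: each build_io_request_N assembles a different SGE list, and the "Invalid IO length parameter" cases are negative tests whose total byte count deliberately does not match the LBA count. A sketch of how such a scattered read is issued through spdk_nvme_ns_cmd_readv(), using a hypothetical four-element SGE list (not the test's actual source):

```c
/* Sketch of a scattered read via spdk_nvme_ns_cmd_readv(); the two callbacks
 * walk a caller-owned SGE list. Buffer and qpair setup are assumed to exist
 * elsewhere (see the hello_world sketch earlier in this log). */
#include <stdint.h>
#include "spdk/nvme.h"

struct sgl_ctx {
	struct { void *base; uint32_t len; } sge[4]; /* hypothetical list */
	int idx;
};

static void
reset_sgl(void *arg, uint32_t offset)
{
	struct sgl_ctx *c = arg;

	c->idx = 0; /* a full implementation would seek to `offset` */
}

static int
next_sge(void *arg, void **address, uint32_t *length)
{
	struct sgl_ctx *c = arg;

	*address = c->sge[c->idx].base;
	*length = c->sge[c->idx].len;
	c->idx++;
	return 0;
}

static int
scattered_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
               struct sgl_ctx *c, spdk_nvme_cmd_cb cb, void *cb_arg)
{
	/* The SGE lengths must sum to lba_count * sector size, or the request
	 * is rejected up front - that rejection is exactly the
	 * "Invalid IO length parameter" lines above. */
	return spdk_nvme_ns_cmd_readv(ns, qp, 0 /* lba */, 8 /* lba_count */,
	                              cb, cb_arg, 0, reset_sgl, next_sge);
}
```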
00:11:14.181 ************************************ 00:11:14.181 END TEST nvme_e2edp 00:11:14.181 ************************************ 00:11:14.181 00:11:14.181 real 0m0.291s 00:11:14.181 user 0m0.107s 00:11:14.181 sys 0m0.129s 00:11:14.181 11:18:41 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.181 11:18:41 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:11:14.440 11:18:41 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:14.440 11:18:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:14.440 11:18:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.440 11:18:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:14.440 ************************************ 00:11:14.440 START TEST nvme_reserve 00:11:14.440 ************************************ 00:11:14.440 11:18:41 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:14.698 ===================================================== 00:11:14.698 NVMe Controller at PCI bus 0, device 16, function 0 00:11:14.698 ===================================================== 00:11:14.698 Reservations: Not Supported 00:11:14.698 ===================================================== 00:11:14.698 NVMe Controller at PCI bus 0, device 17, function 0 00:11:14.698 ===================================================== 00:11:14.698 Reservations: Not Supported 00:11:14.698 ===================================================== 00:11:14.699 NVMe Controller at PCI bus 0, device 19, function 0 00:11:14.699 ===================================================== 00:11:14.699 Reservations: Not Supported 00:11:14.699 ===================================================== 00:11:14.699 NVMe Controller at PCI bus 0, device 18, function 0 00:11:14.699 ===================================================== 00:11:14.699 Reservations: Not Supported 00:11:14.699 Reservation test passed 00:11:14.699 ************************************ 00:11:14.699 END TEST nvme_reserve 00:11:14.699 ************************************ 00:11:14.699 00:11:14.699 real 0m0.292s 00:11:14.699 user 0m0.107s 00:11:14.699 sys 0m0.142s 00:11:14.699 11:18:41 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.699 11:18:41 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:11:14.699 11:18:41 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:14.699 11:18:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:14.699 11:18:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.699 11:18:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:14.699 ************************************ 00:11:14.699 START TEST nvme_err_injection 00:11:14.699 ************************************ 00:11:14.699 11:18:41 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:14.956 NVMe Error Injection test 00:11:14.956 Attached to 0000:00:10.0 00:11:14.956 Attached to 0000:00:11.0 00:11:14.956 Attached to 0000:00:13.0 00:11:14.956 Attached to 0000:00:12.0 00:11:14.956 0000:00:12.0: get features failed as expected 00:11:14.956 0000:00:10.0: get features failed as expected 00:11:14.956 0000:00:11.0: get features failed as expected 00:11:14.956 0000:00:13.0: get features failed as expected 00:11:14.957 
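The "get features failed as expected" lines above, and the "successfully as expected" lines that follow, are the two halves of nvme_err_injection: a one-shot error is injected into the admin queue, the failure is observed, and then the same command is verified to succeed again once the injection is spent. A sketch of the injection step, assuming the error-injection helper declared in spdk/nvme.h has the shape shown here (this is not the test's actual source):

```c
/* Sketch: make the next Get Features on the admin queue fail once with
 * Invalid Field, then behave normally - the "failed as expected" /
 * "successfully as expected" pairing in the log. Assumes a probed ctrlr. */
#include <stdbool.h>
#include "spdk/nvme.h"

static int
inject_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
	/* NULL qpair targets the admin queue; err_count 1 = fail exactly once. */
	return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
	                                               SPDK_NVME_OPC_GET_FEATURES,
	                                               false /* do submit */,
	                                               0 /* no timeout */,
	                                               1 /* err_count */,
	                                               SPDK_NVME_SCT_GENERIC,
	                                               SPDK_NVME_SC_INVALID_FIELD);
}
```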
0000:00:11.0: get features successfully as expected 00:11:14.957 0000:00:13.0: get features successfully as expected 00:11:14.957 0000:00:12.0: get features successfully as expected 00:11:14.957 0000:00:10.0: get features successfully as expected 00:11:14.957 0000:00:11.0: read failed as expected 00:11:14.957 0000:00:10.0: read failed as expected 00:11:14.957 0000:00:13.0: read failed as expected 00:11:14.957 0000:00:12.0: read failed as expected 00:11:14.957 0000:00:11.0: read successfully as expected 00:11:14.957 0000:00:13.0: read successfully as expected 00:11:14.957 0000:00:10.0: read successfully as expected 00:11:14.957 0000:00:12.0: read successfully as expected 00:11:14.957 Cleaning up... 00:11:14.957 00:11:14.957 real 0m0.293s 00:11:14.957 user 0m0.108s 00:11:14.957 sys 0m0.139s 00:11:14.957 ************************************ 00:11:14.957 END TEST nvme_err_injection 00:11:14.957 ************************************ 00:11:14.957 11:18:42 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.957 11:18:42 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:11:14.957 11:18:42 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:14.957 11:18:42 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:11:14.957 11:18:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.957 11:18:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:15.215 ************************************ 00:11:15.216 START TEST nvme_overhead 00:11:15.216 ************************************ 00:11:15.216 11:18:42 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:16.593 Initializing NVMe Controllers 00:11:16.593 Attached to 0000:00:10.0 00:11:16.593 Attached to 0000:00:11.0 00:11:16.593 Attached to 0000:00:13.0 00:11:16.593 Attached to 0000:00:12.0 00:11:16.593 Initialization complete. Launching workers. 
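The submit/complete histograms that follow report per-I/O software overhead in nanoseconds: "submit" is time spent inside the submission call, and "complete" is time spent inside a completion poll that actually reaped something. A sketch of that accounting idea using the TSC helpers, assuming ns/qp/buf were set up as in the earlier hello_world sketch (the overhead tool's own bookkeeping is more elaborate; this only shows the principle):

```c
/* Sketch: split per-I/O software cost into "submit" and "complete" buckets
 * using the tick counter. Assumes ns, qp and a DMA-safe buf already exist. */
#include <stdint.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static uint64_t g_submit_ns, g_complete_ns;

static void
io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	*(int *)arg = 1;
}

static void
timed_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp, void *buf)
{
	uint64_t hz = spdk_get_ticks_hz();
	int done = 0;

	uint64_t t0 = spdk_get_ticks();
	spdk_nvme_ns_cmd_read(ns, qp, buf, 0, 1, io_done, &done, 0);
	g_submit_ns = (spdk_get_ticks() - t0) * 1000000000ULL / hz;

	while (!done) {
		uint64_t t1 = spdk_get_ticks();
		int n = spdk_nvme_qpair_process_completions(qp, 0);

		if (n > 0) { /* only charge polls that reaped a completion */
			g_complete_ns = (spdk_get_ticks() - t1) * 1000000000ULL / hz;
		}
	}
}
```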
00:11:16.593 submit (in ns) avg, min, max = 13633.3, 10901.2, 97157.4 00:11:16.593 complete (in ns) avg, min, max = 8437.4, 7812.9, 54570.3 00:11:16.593 00:11:16.593 Submit histogram 00:11:16.593 ================ 00:11:16.593 Range in us Cumulative Count 00:11:16.593 10.898 - 10.949: 0.0165% ( 1) 00:11:16.593 11.104 - 11.155: 0.0330% ( 1) 00:11:16.593 12.080 - 12.132: 0.0495% ( 1) 00:11:16.593 12.440 - 12.492: 0.0660% ( 1) 00:11:16.593 12.492 - 12.543: 0.0990% ( 2) 00:11:16.593 12.543 - 12.594: 0.1815% ( 5) 00:11:16.593 12.594 - 12.646: 0.4291% ( 15) 00:11:16.593 12.646 - 12.697: 1.3203% ( 54) 00:11:16.593 12.697 - 12.749: 2.8222% ( 91) 00:11:16.593 12.749 - 12.800: 5.1824% ( 143) 00:11:16.593 12.800 - 12.851: 7.5755% ( 145) 00:11:16.593 12.851 - 12.903: 10.0017% ( 147) 00:11:16.593 12.903 - 12.954: 12.2132% ( 134) 00:11:16.593 12.954 - 13.006: 15.0355% ( 171) 00:11:16.593 13.006 - 13.057: 18.2043% ( 192) 00:11:16.593 13.057 - 13.108: 22.0498% ( 233) 00:11:16.593 13.108 - 13.160: 27.6613% ( 340) 00:11:16.593 13.160 - 13.263: 40.9804% ( 807) 00:11:16.593 13.263 - 13.365: 55.1246% ( 857) 00:11:16.593 13.365 - 13.468: 67.8000% ( 768) 00:11:16.593 13.468 - 13.571: 77.9006% ( 612) 00:11:16.594 13.571 - 13.674: 84.6014% ( 406) 00:11:16.594 13.674 - 13.777: 89.3712% ( 289) 00:11:16.594 13.777 - 13.880: 91.8303% ( 149) 00:11:16.594 13.880 - 13.982: 93.1672% ( 81) 00:11:16.594 13.982 - 14.085: 93.8604% ( 42) 00:11:16.594 14.085 - 14.188: 94.2730% ( 25) 00:11:16.594 14.188 - 14.291: 94.5040% ( 14) 00:11:16.594 14.291 - 14.394: 94.6361% ( 8) 00:11:16.594 14.394 - 14.496: 94.6856% ( 3) 00:11:16.594 14.496 - 14.599: 94.7351% ( 3) 00:11:16.594 14.599 - 14.702: 94.7681% ( 2) 00:11:16.594 14.702 - 14.805: 94.8011% ( 2) 00:11:16.594 14.805 - 14.908: 94.8341% ( 2) 00:11:16.594 14.908 - 15.010: 94.8506% ( 1) 00:11:16.594 15.113 - 15.216: 94.8836% ( 2) 00:11:16.594 15.216 - 15.319: 94.9167% ( 2) 00:11:16.594 15.319 - 15.422: 94.9332% ( 1) 00:11:16.594 15.524 - 15.627: 94.9497% ( 1) 00:11:16.594 15.627 - 15.730: 94.9662% ( 1) 00:11:16.594 15.730 - 15.833: 94.9827% ( 1) 00:11:16.594 15.833 - 15.936: 94.9992% ( 1) 00:11:16.594 15.936 - 16.039: 95.0157% ( 1) 00:11:16.594 16.039 - 16.141: 95.0322% ( 1) 00:11:16.594 16.141 - 16.244: 95.0487% ( 1) 00:11:16.594 16.347 - 16.450: 95.0817% ( 2) 00:11:16.594 16.450 - 16.553: 95.0982% ( 1) 00:11:16.594 16.553 - 16.655: 95.1147% ( 1) 00:11:16.594 16.655 - 16.758: 95.1312% ( 1) 00:11:16.594 16.964 - 17.067: 95.2137% ( 5) 00:11:16.594 17.067 - 17.169: 95.2797% ( 4) 00:11:16.594 17.169 - 17.272: 95.4448% ( 10) 00:11:16.594 17.272 - 17.375: 95.5603% ( 7) 00:11:16.594 17.375 - 17.478: 95.7089% ( 9) 00:11:16.594 17.478 - 17.581: 95.8739% ( 10) 00:11:16.594 17.581 - 17.684: 96.0224% ( 9) 00:11:16.594 17.684 - 17.786: 96.2040% ( 11) 00:11:16.594 17.786 - 17.889: 96.4351% ( 14) 00:11:16.594 17.889 - 17.992: 96.6991% ( 16) 00:11:16.594 17.992 - 18.095: 96.7982% ( 6) 00:11:16.594 18.095 - 18.198: 96.9962% ( 12) 00:11:16.594 18.198 - 18.300: 97.2768% ( 17) 00:11:16.594 18.300 - 18.403: 97.4913% ( 13) 00:11:16.594 18.403 - 18.506: 97.6894% ( 12) 00:11:16.594 18.506 - 18.609: 97.8214% ( 8) 00:11:16.594 18.609 - 18.712: 97.9370% ( 7) 00:11:16.594 18.712 - 18.814: 98.1680% ( 14) 00:11:16.594 18.814 - 18.917: 98.3166% ( 9) 00:11:16.594 18.917 - 19.020: 98.4486% ( 8) 00:11:16.594 19.020 - 19.123: 98.5641% ( 7) 00:11:16.594 19.123 - 19.226: 98.5971% ( 2) 00:11:16.594 19.226 - 19.329: 98.6136% ( 1) 00:11:16.594 19.329 - 19.431: 98.6962% ( 5) 00:11:16.594 19.431 - 19.534: 98.7787% ( 5) 
00:11:16.594 19.534 - 19.637: 98.8447% ( 4) 00:11:16.594 19.637 - 19.740: 98.8942% ( 3) 00:11:16.594 19.740 - 19.843: 98.9932% ( 6) 00:11:16.594 19.843 - 19.945: 99.0427% ( 3) 00:11:16.594 19.945 - 20.048: 99.0758% ( 2) 00:11:16.594 20.048 - 20.151: 99.0923% ( 1) 00:11:16.594 20.151 - 20.254: 99.1088% ( 1) 00:11:16.594 20.254 - 20.357: 99.1253% ( 1) 00:11:16.594 20.357 - 20.459: 99.1583% ( 2) 00:11:16.594 20.459 - 20.562: 99.2078% ( 3) 00:11:16.594 20.562 - 20.665: 99.2573% ( 3) 00:11:16.594 20.665 - 20.768: 99.3068% ( 3) 00:11:16.594 20.768 - 20.871: 99.3398% ( 2) 00:11:16.594 20.973 - 21.076: 99.3893% ( 3) 00:11:16.594 21.179 - 21.282: 99.4058% ( 1) 00:11:16.594 21.488 - 21.590: 99.4223% ( 1) 00:11:16.594 21.693 - 21.796: 99.4389% ( 1) 00:11:16.594 21.796 - 21.899: 99.4554% ( 1) 00:11:16.594 21.899 - 22.002: 99.4719% ( 1) 00:11:16.594 22.002 - 22.104: 99.4884% ( 1) 00:11:16.594 22.207 - 22.310: 99.5049% ( 1) 00:11:16.594 22.413 - 22.516: 99.5214% ( 1) 00:11:16.594 22.516 - 22.618: 99.5379% ( 1) 00:11:16.594 22.721 - 22.824: 99.5709% ( 2) 00:11:16.594 22.927 - 23.030: 99.6039% ( 2) 00:11:16.594 23.030 - 23.133: 99.6204% ( 1) 00:11:16.594 23.647 - 23.749: 99.6369% ( 1) 00:11:16.594 23.749 - 23.852: 99.6534% ( 1) 00:11:16.594 23.955 - 24.058: 99.6699% ( 1) 00:11:16.594 24.058 - 24.161: 99.6864% ( 1) 00:11:16.594 24.161 - 24.263: 99.7029% ( 1) 00:11:16.594 25.189 - 25.292: 99.7194% ( 1) 00:11:16.594 25.600 - 25.703: 99.7524% ( 2) 00:11:16.594 26.011 - 26.114: 99.7689% ( 1) 00:11:16.594 26.217 - 26.320: 99.7854% ( 1) 00:11:16.594 26.525 - 26.731: 99.8515% ( 4) 00:11:16.594 26.731 - 26.937: 99.8680% ( 1) 00:11:16.594 30.638 - 30.843: 99.8845% ( 1) 00:11:16.594 33.516 - 33.722: 99.9010% ( 1) 00:11:16.594 34.750 - 34.956: 99.9175% ( 1) 00:11:16.594 38.657 - 38.863: 99.9340% ( 1) 00:11:16.594 39.068 - 39.274: 99.9505% ( 1) 00:11:16.594 45.443 - 45.648: 99.9670% ( 1) 00:11:16.594 66.622 - 67.033: 99.9835% ( 1) 00:11:16.594 97.054 - 97.465: 100.0000% ( 1) 00:11:16.594 00:11:16.594 Complete histogram 00:11:16.594 ================== 00:11:16.594 Range in us Cumulative Count 00:11:16.594 7.762 - 7.814: 0.0165% ( 1) 00:11:16.594 7.814 - 7.865: 0.9903% ( 59) 00:11:16.594 7.865 - 7.916: 6.0901% ( 309) 00:11:16.594 7.916 - 7.968: 16.5869% ( 636) 00:11:16.594 7.968 - 8.019: 30.2525% ( 828) 00:11:16.594 8.019 - 8.071: 42.6968% ( 754) 00:11:16.594 8.071 - 8.122: 53.1276% ( 632) 00:11:16.594 8.122 - 8.173: 60.7361% ( 461) 00:11:16.594 8.173 - 8.225: 65.7204% ( 302) 00:11:16.594 8.225 - 8.276: 69.0213% ( 200) 00:11:16.594 8.276 - 8.328: 70.6552% ( 99) 00:11:16.594 8.328 - 8.379: 71.6950% ( 63) 00:11:16.594 8.379 - 8.431: 72.4212% ( 44) 00:11:16.594 8.431 - 8.482: 72.9163% ( 30) 00:11:16.594 8.482 - 8.533: 73.2959% ( 23) 00:11:16.594 8.533 - 8.585: 73.7746% ( 29) 00:11:16.594 8.585 - 8.636: 74.2037% ( 26) 00:11:16.594 8.636 - 8.688: 75.0619% ( 52) 00:11:16.594 8.688 - 8.739: 75.6726% ( 37) 00:11:16.594 8.739 - 8.790: 76.6628% ( 60) 00:11:16.594 8.790 - 8.842: 78.9074% ( 136) 00:11:16.594 8.842 - 8.893: 81.3501% ( 148) 00:11:16.594 8.893 - 8.945: 83.4956% ( 130) 00:11:16.594 8.945 - 8.996: 85.4762% ( 120) 00:11:16.594 8.996 - 9.047: 87.6217% ( 130) 00:11:16.594 9.047 - 9.099: 89.5527% ( 117) 00:11:16.594 9.099 - 9.150: 91.3187% ( 107) 00:11:16.594 9.150 - 9.202: 92.6225% ( 79) 00:11:16.594 9.202 - 9.253: 93.6293% ( 61) 00:11:16.594 9.253 - 9.304: 94.7681% ( 69) 00:11:16.594 9.304 - 9.356: 95.6593% ( 54) 00:11:16.594 9.356 - 9.407: 96.3855% ( 44) 00:11:16.594 9.407 - 9.459: 96.7982% ( 25) 00:11:16.594 9.459 - 
9.510: 97.1447% ( 21) 00:11:16.594 9.510 - 9.561: 97.4418% ( 18) 00:11:16.594 9.561 - 9.613: 97.6069% ( 10) 00:11:16.594 9.613 - 9.664: 97.7059% ( 6) 00:11:16.594 9.664 - 9.716: 97.7884% ( 5) 00:11:16.594 9.716 - 9.767: 97.9700% ( 11) 00:11:16.594 9.767 - 9.818: 98.0360% ( 4) 00:11:16.594 9.818 - 9.870: 98.0690% ( 2) 00:11:16.594 9.870 - 9.921: 98.1185% ( 3) 00:11:16.594 9.921 - 9.973: 98.1350% ( 1) 00:11:16.594 9.973 - 10.024: 98.2010% ( 4) 00:11:16.594 10.024 - 10.076: 98.2340% ( 2) 00:11:16.594 10.076 - 10.127: 98.2505% ( 1) 00:11:16.594 10.127 - 10.178: 98.3000% ( 3) 00:11:16.594 10.230 - 10.281: 98.3166% ( 1) 00:11:16.594 10.281 - 10.333: 98.3496% ( 2) 00:11:16.594 10.384 - 10.435: 98.3661% ( 1) 00:11:16.594 10.590 - 10.641: 98.3826% ( 1) 00:11:16.594 10.744 - 10.795: 98.3991% ( 1) 00:11:16.594 10.898 - 10.949: 98.4156% ( 1) 00:11:16.594 11.001 - 11.052: 98.4321% ( 1) 00:11:16.594 11.258 - 11.309: 98.4486% ( 1) 00:11:16.594 11.618 - 11.669: 98.4651% ( 1) 00:11:16.594 11.669 - 11.720: 98.4816% ( 1) 00:11:16.594 11.978 - 12.029: 98.4981% ( 1) 00:11:16.594 12.389 - 12.440: 98.5146% ( 1) 00:11:16.594 12.543 - 12.594: 98.5476% ( 2) 00:11:16.594 12.646 - 12.697: 98.5641% ( 1) 00:11:16.594 12.954 - 13.006: 98.5806% ( 1) 00:11:16.594 13.108 - 13.160: 98.6136% ( 2) 00:11:16.594 13.160 - 13.263: 98.6466% ( 2) 00:11:16.594 13.263 - 13.365: 98.6962% ( 3) 00:11:16.594 13.365 - 13.468: 98.7457% ( 3) 00:11:16.594 13.468 - 13.571: 98.8942% ( 9) 00:11:16.594 13.571 - 13.674: 99.0097% ( 7) 00:11:16.594 13.674 - 13.777: 99.0593% ( 3) 00:11:16.594 13.777 - 13.880: 99.1253% ( 4) 00:11:16.594 13.880 - 13.982: 99.1913% ( 4) 00:11:16.594 13.982 - 14.085: 99.2243% ( 2) 00:11:16.594 14.085 - 14.188: 99.2573% ( 2) 00:11:16.594 14.188 - 14.291: 99.2903% ( 2) 00:11:16.594 14.291 - 14.394: 99.3068% ( 1) 00:11:16.594 14.394 - 14.496: 99.3563% ( 3) 00:11:16.594 14.496 - 14.599: 99.4223% ( 4) 00:11:16.594 14.599 - 14.702: 99.4554% ( 2) 00:11:16.594 14.702 - 14.805: 99.4884% ( 2) 00:11:16.594 14.805 - 14.908: 99.5214% ( 2) 00:11:16.594 14.908 - 15.010: 99.5379% ( 1) 00:11:16.594 15.010 - 15.113: 99.5709% ( 2) 00:11:16.594 15.216 - 15.319: 99.5874% ( 1) 00:11:16.594 15.319 - 15.422: 99.6039% ( 1) 00:11:16.594 15.524 - 15.627: 99.6204% ( 1) 00:11:16.594 16.450 - 16.553: 99.6369% ( 1) 00:11:16.594 16.861 - 16.964: 99.6534% ( 1) 00:11:16.594 18.506 - 18.609: 99.6699% ( 1) 00:11:16.594 18.917 - 19.020: 99.6864% ( 1) 00:11:16.595 19.123 - 19.226: 99.7029% ( 1) 00:11:16.595 19.740 - 19.843: 99.7194% ( 1) 00:11:16.595 20.254 - 20.357: 99.7359% ( 1) 00:11:16.595 20.459 - 20.562: 99.7524% ( 1) 00:11:16.595 20.973 - 21.076: 99.7689% ( 1) 00:11:16.595 21.076 - 21.179: 99.7854% ( 1) 00:11:16.595 22.824 - 22.927: 99.8019% ( 1) 00:11:16.595 23.133 - 23.235: 99.8185% ( 1) 00:11:16.595 23.544 - 23.647: 99.8350% ( 1) 00:11:16.595 23.852 - 23.955: 99.8515% ( 1) 00:11:16.595 24.058 - 24.161: 99.8680% ( 1) 00:11:16.595 24.366 - 24.469: 99.8845% ( 1) 00:11:16.595 24.880 - 24.983: 99.9010% ( 1) 00:11:16.595 24.983 - 25.086: 99.9175% ( 1) 00:11:16.595 25.086 - 25.189: 99.9340% ( 1) 00:11:16.595 27.348 - 27.553: 99.9505% ( 1) 00:11:16.595 27.965 - 28.170: 99.9670% ( 1) 00:11:16.595 28.787 - 28.993: 99.9835% ( 1) 00:11:16.595 54.284 - 54.696: 100.0000% ( 1) 00:11:16.595 00:11:16.595 00:11:16.595 real 0m1.307s 00:11:16.595 user 0m1.102s 00:11:16.595 sys 0m0.156s 00:11:16.595 ************************************ 00:11:16.595 END TEST nvme_overhead 00:11:16.595 ************************************ 00:11:16.595 11:18:43 nvme.nvme_overhead -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.595 11:18:43 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:11:16.595 11:18:43 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:16.595 11:18:43 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:16.595 11:18:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.595 11:18:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:16.595 ************************************ 00:11:16.595 START TEST nvme_arbitration 00:11:16.595 ************************************ 00:11:16.595 11:18:43 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:19.884 Initializing NVMe Controllers 00:11:19.884 Attached to 0000:00:10.0 00:11:19.884 Attached to 0000:00:11.0 00:11:19.884 Attached to 0000:00:13.0 00:11:19.884 Attached to 0000:00:12.0 00:11:19.884 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:11:19.884 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:11:19.884 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:11:19.884 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:11:19.884 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:11:19.884 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:11:19.884 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:11:19.884 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:19.884 Initialization complete. Launching workers. 00:11:19.884 Starting thread on core 1 with urgent priority queue 00:11:19.884 Starting thread on core 2 with urgent priority queue 00:11:19.884 Starting thread on core 3 with urgent priority queue 00:11:19.884 Starting thread on core 0 with urgent priority queue 00:11:19.884 QEMU NVMe Ctrl (12340 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:11:19.884 QEMU NVMe Ctrl (12342 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:11:19.884 QEMU NVMe Ctrl (12341 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:11:19.884 QEMU NVMe Ctrl (12342 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:11:19.884 QEMU NVMe Ctrl (12343 ) core 2: 576.00 IO/s 173.61 secs/100000 ios 00:11:19.884 QEMU NVMe Ctrl (12342 ) core 3: 597.33 IO/s 167.41 secs/100000 ios 00:11:19.884 ======================================================== 00:11:19.884 00:11:19.884 00:11:19.884 real 0m3.432s 00:11:19.884 user 0m9.359s 00:11:19.884 sys 0m0.165s 00:11:19.884 11:18:46 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.884 ************************************ 00:11:19.884 END TEST nvme_arbitration 00:11:19.884 ************************************ 00:11:19.884 11:18:46 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:11:19.884 11:18:46 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:19.884 11:18:46 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:19.884 11:18:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.884 11:18:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:19.884 ************************************ 00:11:19.884 START TEST nvme_single_aen 00:11:19.884 ************************************ 00:11:19.884 11:18:46 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 
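The nvme_arbitration table above spreads six qpairs across four cores, with each core also driving an urgent-priority queue; a qpair's priority is fixed at allocation time. A sketch of that allocation, assuming the controller was enabled with weighted-round-robin arbitration selected (otherwise the qprio field is ignored):

```c
/* Sketch: arbitration is exercised by allocating I/O qpairs with different
 * priority classes. Only meaningful when the controller supports WRR
 * (CAP.AMS) and WRR was selected when the controller was enabled. */
#include "spdk/nvme.h"

static struct spdk_nvme_qpair *
alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_io_qpair_opts opts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	opts.qprio = SPDK_NVME_QPRIO_URGENT; /* vs HIGH / MEDIUM / LOW */
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}
```

In the table above the per-queue IOPS stay nearly equal because all queues run at the same priority class per core; skew only appears when classes differ.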
00:11:20.143 Asynchronous Event Request test 00:11:20.143 Attached to 0000:00:10.0 00:11:20.143 Attached to 0000:00:11.0 00:11:20.143 Attached to 0000:00:13.0 00:11:20.143 Attached to 0000:00:12.0 00:11:20.143 Reset controller to setup AER completions for this process 00:11:20.143 Registering asynchronous event callbacks... 00:11:20.143 Getting orig temperature thresholds of all controllers 00:11:20.143 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:20.143 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:20.143 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:20.143 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:20.143 Setting all controllers temperature threshold low to trigger AER 00:11:20.143 Waiting for all controllers temperature threshold to be set lower 00:11:20.143 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:20.143 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:20.144 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:20.144 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:20.144 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:20.144 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:20.144 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:20.144 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:20.144 Waiting for all controllers to trigger AER and reset threshold 00:11:20.144 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:20.144 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:20.144 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:20.144 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:20.144 Cleaning up... 
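The single_aen sequence above is a complete AER round trip: register a callback, drop the temperature threshold below the current 323 Kelvin so the controller raises the async event, then restore the threshold from the callback. A sketch of the two driver calls involved; the 310 Kelvin value is illustrative only:

```c
/* Sketch of the AER round-trip in the log: register a callback, then set
 * the temperature threshold below the reported temperature so the
 * controller raises the event. Assumes a probed ctrlr. */
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* Log page 2 (SMART / health) carries the temperature details;
	 * the test resets the threshold from here ("Resetting Temp
	 * Threshold" lines above). */
}

static void
set_feat_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
}

static int
trigger_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
	/* cdw11 bits [15:0] = threshold in Kelvin; 310 < 323 triggers it. */
	return spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
	                                       SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
	                                       310, 0, NULL, 0,
	                                       set_feat_done, NULL);
}
```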
00:11:20.403 00:11:20.403 real 0m0.299s 00:11:20.403 user 0m0.118s 00:11:20.403 sys 0m0.140s 00:11:20.403 ************************************ 00:11:20.403 END TEST nvme_single_aen 00:11:20.403 ************************************ 00:11:20.403 11:18:47 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.403 11:18:47 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:11:20.403 11:18:47 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:20.403 11:18:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:20.403 11:18:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.403 11:18:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:20.403 ************************************ 00:11:20.403 START TEST nvme_doorbell_aers 00:11:20.403 ************************************ 00:11:20.403 11:18:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:11:20.403 11:18:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:11:20.403 11:18:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:20.403 11:18:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:20.403 11:18:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:20.403 11:18:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:20.403 11:18:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:11:20.403 11:18:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:20.403 11:18:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:20.403 11:18:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:20.403 11:18:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:20.403 11:18:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:20.403 11:18:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:20.403 11:18:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:20.662 [2024-12-10 11:18:47.742944] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64592) is not found. Dropping the request. 00:11:30.642 Executing: test_write_invalid_db 00:11:30.642 Waiting for AER completion... 00:11:30.642 Failure: test_write_invalid_db 00:11:30.642 00:11:30.642 Executing: test_invalid_db_write_overflow_sq 00:11:30.642 Waiting for AER completion... 00:11:30.642 Failure: test_invalid_db_write_overflow_sq 00:11:30.642 00:11:30.642 Executing: test_invalid_db_write_overflow_cq 00:11:30.642 Waiting for AER completion... 
00:11:30.642 Failure: test_invalid_db_write_overflow_cq 00:11:30.642 00:11:30.642 11:18:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:30.642 11:18:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:30.901 [2024-12-10 11:18:57.793178] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64592) is not found. Dropping the request. 00:11:40.914 Executing: test_write_invalid_db 00:11:40.914 Waiting for AER completion... 00:11:40.914 Failure: test_write_invalid_db 00:11:40.914 00:11:40.914 Executing: test_invalid_db_write_overflow_sq 00:11:40.914 Waiting for AER completion... 00:11:40.914 Failure: test_invalid_db_write_overflow_sq 00:11:40.914 00:11:40.914 Executing: test_invalid_db_write_overflow_cq 00:11:40.914 Waiting for AER completion... 00:11:40.914 Failure: test_invalid_db_write_overflow_cq 00:11:40.914 00:11:40.914 11:19:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:40.914 11:19:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:40.914 [2024-12-10 11:19:07.851183] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64592) is not found. Dropping the request. 00:11:50.893 Executing: test_write_invalid_db 00:11:50.893 Waiting for AER completion... 00:11:50.893 Failure: test_write_invalid_db 00:11:50.893 00:11:50.893 Executing: test_invalid_db_write_overflow_sq 00:11:50.893 Waiting for AER completion... 00:11:50.893 Failure: test_invalid_db_write_overflow_sq 00:11:50.893 00:11:50.893 Executing: test_invalid_db_write_overflow_cq 00:11:50.893 Waiting for AER completion... 00:11:50.893 Failure: test_invalid_db_write_overflow_cq 00:11:50.893 00:11:50.893 11:19:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:50.893 11:19:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:50.893 [2024-12-10 11:19:17.933466] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64592) is not found. Dropping the request. 00:12:00.892 Executing: test_write_invalid_db 00:12:00.892 Waiting for AER completion... 00:12:00.892 Failure: test_write_invalid_db 00:12:00.892 00:12:00.892 Executing: test_invalid_db_write_overflow_sq 00:12:00.892 Waiting for AER completion... 00:12:00.892 Failure: test_invalid_db_write_overflow_sq 00:12:00.892 00:12:00.892 Executing: test_invalid_db_write_overflow_cq 00:12:00.892 Waiting for AER completion... 
00:12:00.892 Failure: test_invalid_db_write_overflow_cq 00:12:00.892 00:12:00.892 ************************************ 00:12:00.892 END TEST nvme_doorbell_aers 00:12:00.892 ************************************ 00:12:00.892 00:12:00.892 real 0m40.327s 00:12:00.892 user 0m28.735s 00:12:00.892 sys 0m11.239s 00:12:00.892 11:19:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.892 11:19:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:12:00.892 11:19:27 nvme -- nvme/nvme.sh@97 -- # uname 00:12:00.892 11:19:27 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:00.892 11:19:27 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:00.892 11:19:27 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:00.892 11:19:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.892 11:19:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:00.892 ************************************ 00:12:00.892 START TEST nvme_multi_aen 00:12:00.892 ************************************ 00:12:00.892 11:19:27 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:00.892 [2024-12-10 11:19:27.987881] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64592) is not found. Dropping the request. 00:12:00.892 [2024-12-10 11:19:27.987976] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64592) is not found. Dropping the request. 00:12:00.892 [2024-12-10 11:19:27.988009] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64592) is not found. Dropping the request. 00:12:00.892 [2024-12-10 11:19:27.989878] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64592) is not found. Dropping the request. 00:12:00.892 [2024-12-10 11:19:27.989933] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64592) is not found. Dropping the request. 00:12:00.892 [2024-12-10 11:19:27.989949] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64592) is not found. Dropping the request. 00:12:01.151 [2024-12-10 11:19:27.991247] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64592) is not found. Dropping the request. 00:12:01.151 [2024-12-10 11:19:27.991289] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64592) is not found. Dropping the request. 00:12:01.151 [2024-12-10 11:19:27.991304] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64592) is not found. Dropping the request. 00:12:01.151 [2024-12-10 11:19:27.992626] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64592) is not found. Dropping the request. 00:12:01.151 [2024-12-10 11:19:27.992671] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64592) is not found. Dropping the request. 00:12:01.151 [2024-12-10 11:19:27.992685] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64592) is not found. Dropping the request. 
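The test_write_invalid_db and overflow "Failure" lines above are the expected outcomes of nvme_doorbell_aers: the test writes out-of-range values straight into the submission- and completion-queue doorbell registers and expects the controller to flag the error asynchronously. The register layout it pokes is fixed by the NVMe specification; a worked computation of the offsets:

```c
/* Doorbell register offsets per the NVMe spec (the registers the
 * doorbell_aers test writes invalid values into). For queue id y:
 *   SQ y tail doorbell: 0x1000 + (2y)     * (4 << CAP.DSTRD)
 *   CQ y head doorbell: 0x1000 + (2y + 1) * (4 << CAP.DSTRD)
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t
sq_tail_db(uint16_t qid, uint8_t dstrd)
{
	return 0x1000 + (2u * qid) * (4u << dstrd);
}

static uint64_t
cq_head_db(uint16_t qid, uint8_t dstrd)
{
	return 0x1000 + (2u * qid + 1) * (4u << dstrd);
}

int
main(void)
{
	/* With DSTRD = 0 (the common case) doorbells sit 4 bytes apart:
	 * SQ1 tail at 0x1008, CQ1 head at 0x100c. */
	printf("SQ1 tail: 0x%llx, CQ1 head: 0x%llx\n",
	       (unsigned long long)sq_tail_db(1, 0),
	       (unsigned long long)cq_head_db(1, 0));
	return 0;
}
```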
00:12:01.151 Child process pid: 65113 00:12:01.410 [Child] Asynchronous Event Request test 00:12:01.410 [Child] Attached to 0000:00:10.0 00:12:01.410 [Child] Attached to 0000:00:11.0 00:12:01.410 [Child] Attached to 0000:00:13.0 00:12:01.410 [Child] Attached to 0000:00:12.0 00:12:01.410 [Child] Registering asynchronous event callbacks... 00:12:01.410 [Child] Getting orig temperature thresholds of all controllers 00:12:01.410 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:01.410 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:01.410 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:01.410 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:01.410 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:01.410 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:01.410 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:01.410 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:01.410 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:01.410 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:01.410 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:01.410 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:01.410 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:01.410 [Child] Cleaning up... 00:12:01.410 Asynchronous Event Request test 00:12:01.410 Attached to 0000:00:10.0 00:12:01.410 Attached to 0000:00:11.0 00:12:01.410 Attached to 0000:00:13.0 00:12:01.410 Attached to 0000:00:12.0 00:12:01.410 Reset controller to setup AER completions for this process 00:12:01.410 Registering asynchronous event callbacks... 
00:12:01.410 Getting orig temperature thresholds of all controllers 00:12:01.410 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:01.410 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:01.410 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:01.410 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:01.410 Setting all controllers temperature threshold low to trigger AER 00:12:01.410 Waiting for all controllers temperature threshold to be set lower 00:12:01.410 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:01.410 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:01.410 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:01.410 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:01.410 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:01.410 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:01.410 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:01.410 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:01.410 Waiting for all controllers to trigger AER and reset threshold 00:12:01.410 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:01.410 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:01.410 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:01.410 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:01.410 Cleaning up... 00:12:01.410 ************************************ 00:12:01.410 END TEST nvme_multi_aen 00:12:01.410 ************************************ 00:12:01.410 00:12:01.410 real 0m0.630s 00:12:01.410 user 0m0.218s 00:12:01.410 sys 0m0.307s 00:12:01.410 11:19:28 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.410 11:19:28 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:12:01.410 11:19:28 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:01.410 11:19:28 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:01.410 11:19:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.410 11:19:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:01.410 ************************************ 00:12:01.410 START TEST nvme_startup 00:12:01.410 ************************************ 00:12:01.410 11:19:28 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:01.669 Initializing NVMe Controllers 00:12:01.669 Attached to 0000:00:10.0 00:12:01.669 Attached to 0000:00:11.0 00:12:01.669 Attached to 0000:00:13.0 00:12:01.669 Attached to 0000:00:12.0 00:12:01.669 Initialization complete. 00:12:01.669 Time used:194373.812 (us). 
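A detail worth keeping in mind for all of the AER tests in this run (single_aen earlier, and the parent/child pair of multi_aen above): the callbacks only ever fire from inside an admin-queue poll, so every process that registered one must keep polling. A minimal sketch, assuming a probed controller and an externally managed stop flag:

```c
/* Sketch: AER and other admin-command callbacks run from inside this poll,
 * so the loop is what actually delivers the "aer_cb" lines in the log.
 * g_stop is a hypothetical flag set by the surrounding test logic. */
#include "spdk/nvme.h"

extern volatile int g_stop;

static void
admin_poll_loop(struct spdk_nvme_ctrlr *ctrlr)
{
	while (!g_stop) {
		/* Returns < 0 on failure; callbacks execute here. */
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}
```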
00:12:01.669 00:12:01.669 real 0m0.297s 00:12:01.669 user 0m0.093s 00:12:01.669 sys 0m0.149s 00:12:01.669 ************************************ 00:12:01.669 END TEST nvme_startup 00:12:01.669 ************************************ 00:12:01.669 11:19:28 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.669 11:19:28 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:12:01.928 11:19:28 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:01.928 11:19:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:01.928 11:19:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.928 11:19:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:01.928 ************************************ 00:12:01.928 START TEST nvme_multi_secondary 00:12:01.928 ************************************ 00:12:01.928 11:19:28 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:12:01.928 11:19:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65169 00:12:01.928 11:19:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:01.928 11:19:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65170 00:12:01.928 11:19:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:01.928 11:19:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:05.215 Initializing NVMe Controllers 00:12:05.215 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:05.215 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:05.215 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:05.215 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:05.215 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:05.215 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:05.215 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:05.215 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:05.215 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:05.215 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:05.215 Initialization complete. Launching workers. 
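nvme_multi_secondary runs several spdk_nvme_perf instances (pids 65169/65170, then 65240/65241) against the same controllers at once; they can share hugepage and device state because each is started with -i 0, the shared-memory instance id, with the first instance acting as primary and the rest attaching as secondaries. Roughly what that flag amounts to at env-init time (the name string here is hypothetical); the per-instance latency tables follow below:

```c
/* Sketch: what perf's "-i 0" amounts to - joining shared-memory instance 0
 * so multiple processes can drive the same controllers. Primary vs.
 * secondary role is negotiated by the env layer underneath. */
#include "spdk/env.h"

static int
init_shared_env(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "perf_sketch"; /* hypothetical app name */
	opts.shm_id = 0;           /* matches the -i 0 in the log */
	return spdk_env_init(&opts);
}
```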
00:12:05.215 ======================================================== 00:12:05.215 Latency(us) 00:12:05.215 Device Information : IOPS MiB/s Average min max 00:12:05.215 PCIE (0000:00:10.0) NSID 1 from core 2: 3305.29 12.91 4838.70 1315.62 11018.08 00:12:05.215 PCIE (0000:00:11.0) NSID 1 from core 2: 3305.29 12.91 4840.25 1310.70 10840.22 00:12:05.215 PCIE (0000:00:13.0) NSID 1 from core 2: 3305.29 12.91 4840.27 1292.18 11273.20 00:12:05.215 PCIE (0000:00:12.0) NSID 1 from core 2: 3305.29 12.91 4840.34 1303.34 10960.96 00:12:05.215 PCIE (0000:00:12.0) NSID 2 from core 2: 3305.29 12.91 4840.52 1240.33 11129.56 00:12:05.215 PCIE (0000:00:12.0) NSID 3 from core 2: 3305.29 12.91 4840.12 1407.91 11487.24 00:12:05.215 ======================================================== 00:12:05.215 Total : 19831.76 77.47 4840.03 1240.33 11487.24 00:12:05.215 00:12:05.474 11:19:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65169 00:12:05.474 Initializing NVMe Controllers 00:12:05.474 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:05.474 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:05.474 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:05.474 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:05.474 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:05.474 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:05.474 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:05.474 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:05.474 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:05.474 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:05.474 Initialization complete. Launching workers. 00:12:05.474 ======================================================== 00:12:05.474 Latency(us) 00:12:05.474 Device Information : IOPS MiB/s Average min max 00:12:05.474 PCIE (0000:00:10.0) NSID 1 from core 1: 4738.59 18.51 3374.01 1445.97 6598.30 00:12:05.474 PCIE (0000:00:11.0) NSID 1 from core 1: 4738.59 18.51 3375.88 1413.40 6005.07 00:12:05.474 PCIE (0000:00:13.0) NSID 1 from core 1: 4738.59 18.51 3376.19 1564.72 6366.83 00:12:05.474 PCIE (0000:00:12.0) NSID 1 from core 1: 4738.59 18.51 3376.22 1555.56 6156.83 00:12:05.474 PCIE (0000:00:12.0) NSID 2 from core 1: 4738.59 18.51 3376.51 1454.79 6530.88 00:12:05.474 PCIE (0000:00:12.0) NSID 3 from core 1: 4738.59 18.51 3376.55 1496.38 6082.36 00:12:05.474 ======================================================== 00:12:05.474 Total : 28431.52 111.06 3375.90 1413.40 6598.30 00:12:05.474 00:12:07.380 Initializing NVMe Controllers 00:12:07.380 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:07.380 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:07.380 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:07.380 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:07.380 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:07.380 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:07.380 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:07.380 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:07.380 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:07.380 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:07.380 Initialization complete. Launching workers. 
00:12:07.380 ======================================================== 00:12:07.380 Latency(us) 00:12:07.380 Device Information : IOPS MiB/s Average min max 00:12:07.380 PCIE (0000:00:10.0) NSID 1 from core 0: 8094.63 31.62 1975.06 943.86 6625.94 00:12:07.380 PCIE (0000:00:11.0) NSID 1 from core 0: 8094.63 31.62 1976.13 943.81 6726.23 00:12:07.380 PCIE (0000:00:13.0) NSID 1 from core 0: 8094.63 31.62 1976.11 882.80 6502.97 00:12:07.380 PCIE (0000:00:12.0) NSID 1 from core 0: 8094.63 31.62 1976.07 812.36 6792.42 00:12:07.380 PCIE (0000:00:12.0) NSID 2 from core 0: 8094.63 31.62 1976.04 759.68 6880.07 00:12:07.380 PCIE (0000:00:12.0) NSID 3 from core 0: 8097.83 31.63 1975.23 672.11 6818.26 00:12:07.380 ======================================================== 00:12:07.380 Total : 48570.96 189.73 1975.77 672.11 6880.07 00:12:07.380 00:12:07.380 11:19:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65170 00:12:07.380 11:19:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65240 00:12:07.380 11:19:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:12:07.380 11:19:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65241 00:12:07.380 11:19:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:12:07.380 11:19:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:10.668 Initializing NVMe Controllers 00:12:10.668 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:10.668 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:10.668 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:10.668 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:10.668 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:10.668 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:10.668 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:10.668 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:10.668 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:10.668 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:10.668 Initialization complete. Launching workers. 
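A quick sanity check on the Device Information tables in this section: these runs use a 4096-byte I/O size, so the MiB/s column is simply IOPS * 4096 / 2^20, i.e. IOPS / 256. Verifying the core-0 row from the table above:

```c
/* Sanity check for the Device Information tables: with a 4 KiB I/O size,
 * MiB/s = IOPS * 4096 / 2^20. */
#include <stdio.h>

int
main(void)
{
	double iops = 8094.63; /* core-0 row from the table above */
	double mibps = iops * 4096.0 / (1024.0 * 1024.0);

	printf("%.2f MiB/s\n", mibps); /* prints 31.62, matching the table */
	return 0;
}
```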
00:12:10.668 ======================================================== 00:12:10.668 Latency(us) 00:12:10.668 Device Information : IOPS MiB/s Average min max 00:12:10.668 PCIE (0000:00:10.0) NSID 1 from core 0: 5165.13 20.18 3095.39 1167.03 6310.41 00:12:10.668 PCIE (0000:00:11.0) NSID 1 from core 0: 5165.13 20.18 3097.39 1189.36 6277.77 00:12:10.668 PCIE (0000:00:13.0) NSID 1 from core 0: 5165.13 20.18 3097.49 1055.72 6232.42 00:12:10.668 PCIE (0000:00:12.0) NSID 1 from core 0: 5165.13 20.18 3097.65 1194.19 5893.62 00:12:10.668 PCIE (0000:00:12.0) NSID 2 from core 0: 5165.13 20.18 3098.31 1151.35 6522.90 00:12:10.668 PCIE (0000:00:12.0) NSID 3 from core 0: 5165.13 20.18 3098.52 1176.09 6329.24 00:12:10.668 ======================================================== 00:12:10.668 Total : 30990.79 121.06 3097.46 1055.72 6522.90 00:12:10.668 00:12:10.668 Initializing NVMe Controllers 00:12:10.668 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:10.668 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:10.668 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:10.668 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:10.668 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:10.668 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:10.668 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:10.668 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:10.668 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:10.668 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:10.668 Initialization complete. Launching workers. 00:12:10.668 ======================================================== 00:12:10.668 Latency(us) 00:12:10.668 Device Information : IOPS MiB/s Average min max 00:12:10.668 PCIE (0000:00:10.0) NSID 1 from core 1: 5001.67 19.54 3196.42 1045.50 6656.71 00:12:10.668 PCIE (0000:00:11.0) NSID 1 from core 1: 5001.67 19.54 3198.33 1081.58 6589.75 00:12:10.668 PCIE (0000:00:13.0) NSID 1 from core 1: 5001.67 19.54 3198.32 1082.75 6359.07 00:12:10.668 PCIE (0000:00:12.0) NSID 1 from core 1: 5001.67 19.54 3198.56 1081.85 5917.24 00:12:10.668 PCIE (0000:00:12.0) NSID 2 from core 1: 5001.67 19.54 3198.52 1050.13 6328.35 00:12:10.668 PCIE (0000:00:12.0) NSID 3 from core 1: 5001.67 19.54 3198.51 1050.80 6819.38 00:12:10.668 ======================================================== 00:12:10.668 Total : 30010.03 117.23 3198.11 1045.50 6819.38 00:12:10.668 00:12:13.205 Initializing NVMe Controllers 00:12:13.205 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:13.205 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:13.205 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:13.205 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:13.205 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:13.205 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:13.205 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:13.205 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:13.205 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:13.205 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:13.205 Initialization complete. Launching workers. 
00:12:13.205 ========================================================
00:12:13.205 Latency(us)
00:12:13.205 Device Information : IOPS MiB/s Average min max
00:12:13.205 PCIE (0000:00:10.0) NSID 1 from core 2: 3063.82 11.97 5220.80 1178.76 11694.02
00:12:13.205 PCIE (0000:00:11.0) NSID 1 from core 2: 3063.82 11.97 5222.05 1201.46 12617.53
00:12:13.205 PCIE (0000:00:13.0) NSID 1 from core 2: 3063.82 11.97 5221.97 1266.76 13226.94
00:12:13.205 PCIE (0000:00:12.0) NSID 1 from core 2: 3063.62 11.97 5221.96 1258.04 12921.40
00:12:13.205 PCIE (0000:00:12.0) NSID 2 from core 2: 3063.82 11.97 5221.78 1237.53 12056.83
00:12:13.205 PCIE (0000:00:12.0) NSID 3 from core 2: 3063.82 11.97 5221.68 1185.34 12246.23
00:12:13.205 ========================================================
00:12:13.205 Total : 18382.74 71.81 5221.71 1178.76 13226.94
00:12:13.205
00:12:13.205 ************************************
00:12:13.205 END TEST nvme_multi_secondary
00:12:13.205 ************************************
00:12:13.205 11:19:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65240
00:12:13.205 11:19:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65241
00:12:13.205
00:12:13.205 real 0m10.998s
00:12:13.205 user 0m18.598s
00:12:13.205 sys 0m1.042s
00:12:13.205 11:19:39 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:13.205 11:19:39 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:12:13.205 11:19:39 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:12:13.205 11:19:39 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:12:13.205 11:19:39 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64177 ]]
00:12:13.205 11:19:39 nvme -- common/autotest_common.sh@1094 -- # kill 64177
00:12:13.205 11:19:39 nvme -- common/autotest_common.sh@1095 -- # wait 64177
00:12:13.205 [2024-12-10 11:19:39.874464] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65112) is not found. Dropping the request.
00:12:13.205 [2024-12-10 11:19:39.874610] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65112) is not found. Dropping the request.
00:12:13.205 [2024-12-10 11:19:39.874693] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65112) is not found. Dropping the request.
00:12:13.205 [2024-12-10 11:19:39.874748] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65112) is not found. Dropping the request.
00:12:13.205 [2024-12-10 11:19:39.882190] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65112) is not found. Dropping the request.
00:12:13.205 [2024-12-10 11:19:39.882306] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65112) is not found. Dropping the request.
00:12:13.205 [2024-12-10 11:19:39.882352] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65112) is not found. Dropping the request.
00:12:13.205 [2024-12-10 11:19:39.882404] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65112) is not found. Dropping the request.
00:12:13.205 [2024-12-10 11:19:39.887582] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65112) is not found. Dropping the request.
00:12:13.205 [2024-12-10 11:19:39.887653] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65112) is not found. Dropping the request.
00:12:13.205 [2024-12-10 11:19:39.887683] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65112) is not found. Dropping the request.
00:12:13.205 [2024-12-10 11:19:39.887714] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65112) is not found. Dropping the request.
00:12:13.205 [2024-12-10 11:19:39.892456] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65112) is not found. Dropping the request.
00:12:13.205 [2024-12-10 11:19:39.892529] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65112) is not found. Dropping the request.
00:12:13.205 [2024-12-10 11:19:39.892558] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65112) is not found. Dropping the request.
00:12:13.205 [2024-12-10 11:19:39.892590] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65112) is not found. Dropping the request.
00:12:13.205 [2024-12-10 11:19:40.039463] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited.
00:12:13.205 11:19:40 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0
00:12:13.205 11:19:40 nvme -- common/autotest_common.sh@1101 -- # echo 2
00:12:13.205 11:19:40 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:12:13.205 11:19:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:13.205 11:19:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:13.205 11:19:40 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:13.205 ************************************
00:12:13.205 START TEST bdev_nvme_reset_stuck_adm_cmd
00:12:13.205 ************************************
00:12:13.205 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:12:13.205 * Looking for test storage...
00:12:13.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:13.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.206 --rc genhtml_branch_coverage=1 00:12:13.206 --rc genhtml_function_coverage=1 00:12:13.206 --rc genhtml_legend=1 00:12:13.206 --rc geninfo_all_blocks=1 00:12:13.206 --rc geninfo_unexecuted_blocks=1 00:12:13.206 00:12:13.206 ' 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:13.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.206 --rc genhtml_branch_coverage=1 00:12:13.206 --rc genhtml_function_coverage=1 00:12:13.206 --rc genhtml_legend=1 00:12:13.206 --rc geninfo_all_blocks=1 00:12:13.206 --rc geninfo_unexecuted_blocks=1 00:12:13.206 00:12:13.206 ' 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:13.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.206 --rc genhtml_branch_coverage=1 00:12:13.206 --rc genhtml_function_coverage=1 00:12:13.206 --rc genhtml_legend=1 00:12:13.206 --rc geninfo_all_blocks=1 00:12:13.206 --rc geninfo_unexecuted_blocks=1 00:12:13.206 00:12:13.206 ' 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:13.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.206 --rc genhtml_branch_coverage=1 00:12:13.206 --rc genhtml_function_coverage=1 00:12:13.206 --rc genhtml_legend=1 00:12:13.206 --rc geninfo_all_blocks=1 00:12:13.206 --rc geninfo_unexecuted_blocks=1 00:12:13.206 00:12:13.206 ' 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:13.206 
11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:13.206 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:13.465 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:13.465 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:13.465 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:12:13.465 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:12:13.465 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:12:13.465 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:12:13.465 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65403 00:12:13.465 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:13.465 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65403 00:12:13.465 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65403 ']' 00:12:13.465 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.465 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.465 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:13.465 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.465 11:19:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:13.465 [2024-12-10 11:19:40.515123] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:12:13.466 [2024-12-10 11:19:40.515242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65403 ] 00:12:13.724 [2024-12-10 11:19:40.738796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.981 [2024-12-10 11:19:40.858781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.981 [2024-12-10 11:19:40.859004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.981 [2024-12-10 11:19:40.859167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.981 [2024-12-10 11:19:40.859182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.915 11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.915 11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:12:14.915 11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:12:14.915 11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.915 11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:14.915 nvme0n1 00:12:14.915 11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.915 11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:14.915 11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_Mmcnv.txt 00:12:14.915 11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:14.915 11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.915 11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:14.915 true 00:12:14.915 11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.915 11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:14.915 11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733829581 00:12:14.915 11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65431 00:12:14.915 11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:14.915 11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:14.915 
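Note: the setup traced above arms SPDK's admin-command error injection and then issues the command that is meant to hang. Condensed into a standalone sketch, assuming a running spdk_tgt and the rpc.py path from this run; $CMD_B64 stands in for the base64 Get Features / Number of Queues buffer shown above and is a placeholder here:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # Hold the next admin opcode 0x0a (Get Features) for up to 15 s without
    # submitting it, then complete it with sct=0/sc=1 (Invalid Command Opcode).
    $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$CMD_B64" &   # gets stuck
    sleep 2
    $RPC bdev_nvme_reset_controller nvme0   # reset must flush the stuck command

The INVALID OPCODE (00/01) completion and the "Command completed manually" notice in the reset log just below are this injected status being returned while the controller is reset.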
11:19:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:16.855 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:16.855 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.856 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:16.856 [2024-12-10 11:19:43.877103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:16.856 [2024-12-10 11:19:43.877560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:16.856 [2024-12-10 11:19:43.877694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:16.856 [2024-12-10 11:19:43.877821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.856 [2024-12-10 11:19:43.879854] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:12:16.856 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.856 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65431 00:12:16.856 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65431 00:12:16.856 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65431 00:12:16.856 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:16.856 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:16.856 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:16.856 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.856 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:16.856 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.856 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:16.856 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_Mmcnv.txt 00:12:17.114 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:17.114 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:17.114 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:17.114 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:17.114 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:17.114 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:17.114 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:17.114 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:17.114 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:17.114 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:17.114 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:17.114 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:17.114 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:17.114 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:17.114 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:17.114 11:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:17.114 11:19:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:17.114 11:19:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:17.114 11:19:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:17.114 11:19:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_Mmcnv.txt 00:12:17.114 11:19:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65403 00:12:17.114 11:19:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65403 ']' 00:12:17.114 11:19:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65403 00:12:17.114 11:19:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:12:17.114 11:19:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.114 11:19:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65403 00:12:17.114 killing process with pid 65403 00:12:17.114 11:19:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.114 11:19:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.114 11:19:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65403' 00:12:17.114 11:19:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65403 00:12:17.114 11:19:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65403 00:12:19.646 11:19:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:19.646 11:19:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:19.646 ************************************ 00:12:19.646 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:19.646 ************************************ 00:12:19.646 00:12:19.646 real 0m6.419s 
00:12:19.646 user 0m22.277s 00:12:19.646 sys 0m0.813s 00:12:19.646 11:19:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.646 11:19:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:19.646 11:19:46 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:19.646 11:19:46 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:19.646 11:19:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:19.646 11:19:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.646 11:19:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:19.646 ************************************ 00:12:19.646 START TEST nvme_fio 00:12:19.646 ************************************ 00:12:19.646 11:19:46 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:12:19.646 11:19:46 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:19.646 11:19:46 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:19.646 11:19:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:19.646 11:19:46 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:19.646 11:19:46 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:12:19.646 11:19:46 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:19.646 11:19:46 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:19.646 11:19:46 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:19.646 11:19:46 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:19.646 11:19:46 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:19.646 11:19:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:12:19.646 11:19:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:19.646 11:19:46 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:19.646 11:19:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:19.646 11:19:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:19.905 11:19:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:19.905 11:19:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:20.164 11:19:47 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:20.164 11:19:47 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:20.164 11:19:47 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:20.164 11:19:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:20.164 11:19:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:20.164 11:19:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:20.164 11:19:47 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:12:20.164 11:19:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:12:20.164 11:19:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:12:20.164 11:19:47 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:12:20.164 11:19:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:12:20.164 11:19:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:12:20.164 11:19:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:12:20.164 11:19:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:12:20.164 11:19:47 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:12:20.164 11:19:47 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:12:20.164 11:19:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:12:20.164 11:19:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:12:20.424 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:12:20.424 fio-3.35
00:12:20.424 Starting 1 thread
00:12:24.615
00:12:24.615 test: (groupid=0, jobs=1): err= 0: pid=65582: Tue Dec 10 11:19:50 2024
00:12:24.615 read: IOPS=22.3k, BW=87.2MiB/s (91.4MB/s)(174MiB/2001msec)
00:12:24.615 slat (nsec): min=3803, max=66492, avg=4513.82, stdev=909.35
00:12:24.615 clat (usec): min=241, max=12118, avg=2861.63, stdev=319.41
00:12:24.615 lat (usec): min=245, max=12184, avg=2866.14, stdev=319.73
00:12:24.615 clat percentiles (usec):
00:12:24.615 | 1.00th=[ 2474], 5.00th=[ 2606], 10.00th=[ 2671], 20.00th=[ 2737],
00:12:24.615 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868],
00:12:24.615 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3064],
00:12:24.615 | 99.00th=[ 3458], 99.50th=[ 4228], 99.90th=[ 7963], 99.95th=[10159],
00:12:24.615 | 99.99th=[11994]
00:12:24.615 bw ( KiB/s): min=86656, max=90296, per=98.90%, avg=88272.00, stdev=1853.98, samples=3
00:12:24.615 iops : min=21664, max=22574, avg=22068.00, stdev=463.50, samples=3
00:12:24.615 write: IOPS=22.2k, BW=86.6MiB/s (90.8MB/s)(173MiB/2001msec); 0 zone resets
00:12:24.615 slat (nsec): min=3931, max=26801, avg=4707.72, stdev=854.12
00:12:24.615 clat (usec): min=213, max=12035, avg=2867.71, stdev=330.96
00:12:24.615 lat (usec): min=217, max=12053, avg=2872.42, stdev=331.23
00:12:24.615 clat percentiles (usec):
00:12:24.615 | 1.00th=[ 2474], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737],
00:12:24.615 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900],
00:12:24.615 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3097],
00:12:24.615 | 99.00th=[ 3490], 99.50th=[ 4228], 99.90th=[ 8717], 99.95th=[10290],
00:12:24.615 | 99.99th=[11731]
00:12:24.615 bw ( KiB/s): min=86304, max=91152, per=99.71%, avg=88418.67, stdev=2482.51, samples=3
00:12:24.615 iops : min=21576, max=22788, avg=22104.67, stdev=620.63, samples=3
00:12:24.615 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
00:12:24.615 lat (msec) : 2=0.25%, 4=99.10%, 10=0.54%, 20=0.06%
00:12:24.615 cpu : usr=99.45%, sys=0.10%, ctx=2, majf=0, minf=609
00:12:24.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:12:24.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:24.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:24.615 issued rwts: total=44650,44361,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:24.615 latency : target=0, window=0, percentile=100.00%, depth=128
00:12:24.615
00:12:24.615 Run status group 0 (all jobs):
00:12:24.615 READ: bw=87.2MiB/s (91.4MB/s), 87.2MiB/s-87.2MiB/s (91.4MB/s-91.4MB/s), io=174MiB (183MB), run=2001-2001msec
00:12:24.615 WRITE: bw=86.6MiB/s (90.8MB/s), 86.6MiB/s-86.6MiB/s (90.8MB/s-90.8MB/s), io=173MiB (182MB), run=2001-2001msec
00:12:24.615 -----------------------------------------------------
00:12:24.615 Suppressions used:
00:12:24.615 count bytes template
00:12:24.615 1 32 /usr/src/fio/parse.c
00:12:24.615 1 8 libtcmalloc_minimal.so
00:12:24.615 -----------------------------------------------------
00:12:24.615
00:12:24.615 11:19:51 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:12:24.615 11:19:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:12:24.615 11:19:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:12:24.615 11:19:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:12:24.615 11:19:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:12:24.615 11:19:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:12:24.874 11:19:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:12:24.874 11:19:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:12:24.874 11:19:51 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:12:24.874 11:19:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:12:24.874 11:19:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:12:24.874 11:19:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:12:24.874 11:19:51 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:12:24.874 11:19:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:12:24.874 11:19:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:12:24.874 11:19:51 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:12:24.874 11:19:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:12:24.874 11:19:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:12:24.874 11:19:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:12:24.874 11:19:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:12:24.874 11:19:51 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:12:24.874 11:19:51 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:12:24.874 11:19:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:12:24.874 11:19:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:12:24.874 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:12:24.874 fio-3.35
00:12:24.874 Starting 1 thread
00:12:29.063
00:12:29.063 test: (groupid=0, jobs=1): err= 0: pid=65648: Tue Dec 10 11:19:55 2024
00:12:29.063 read: IOPS=22.5k, BW=87.9MiB/s (92.2MB/s)(176MiB/2001msec)
00:12:29.063 slat (nsec): min=3712, max=66638, avg=4489.53, stdev=1023.15
00:12:29.063 clat (usec): min=246, max=13070, avg=2834.42, stdev=324.33
00:12:29.063 lat (usec): min=250, max=13136, avg=2838.91, stdev=324.71
00:12:29.063 clat percentiles (usec):
00:12:29.063 | 1.00th=[ 2507], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737],
00:12:29.063 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2835],
00:12:29.063 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 2999],
00:12:29.063 | 99.00th=[ 3392], 99.50th=[ 4228], 99.90th=[ 7898], 99.95th=[10552],
00:12:29.063 | 99.99th=[12649]
00:12:29.063 bw ( KiB/s): min=87080, max=90536, per=99.13%, avg=89264.00, stdev=1899.95, samples=3
00:12:29.063 iops : min=21770, max=22634, avg=22316.00, stdev=474.99, samples=3
00:12:29.063 write: IOPS=22.4k, BW=87.4MiB/s (91.7MB/s)(175MiB/2001msec); 0 zone resets
00:12:29.063 slat (nsec): min=3891, max=57832, avg=4710.91, stdev=1103.98
00:12:29.063 clat (usec): min=190, max=12770, avg=2841.33, stdev=336.87
00:12:29.063 lat (usec): min=195, max=12791, avg=2846.04, stdev=337.25
00:12:29.063 clat percentiles (usec):
00:12:29.063 | 1.00th=[ 2507], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2737],
00:12:29.063 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2835],
00:12:29.063 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3032],
00:12:29.063 | 99.00th=[ 3425], 99.50th=[ 4293], 99.90th=[ 8848], 99.95th=[10814],
00:12:29.063 | 99.99th=[12518]
00:12:29.063 bw ( KiB/s): min=86784, max=91520, per=99.89%, avg=89434.67, stdev=2418.08, samples=3
00:12:29.063 iops : min=21696, max=22880, avg=22358.67, stdev=604.52, samples=3
00:12:29.063 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01%
00:12:29.063 lat (msec) : 2=0.22%, 4=99.09%, 10=0.57%, 20=0.06%
00:12:29.063 cpu : usr=99.40%, sys=0.05%, ctx=3, majf=0, minf=608
00:12:29.063 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:12:29.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:29.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:29.063 issued rwts: total=45048,44787,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:29.063 latency : target=0, window=0, percentile=100.00%, depth=128
00:12:29.063
00:12:29.063 Run status group 0 (all jobs):
00:12:29.063 READ: bw=87.9MiB/s (92.2MB/s), 87.9MiB/s-87.9MiB/s (92.2MB/s-92.2MB/s), io=176MiB (185MB), run=2001-2001msec
00:12:29.063 WRITE: bw=87.4MiB/s (91.7MB/s), 87.4MiB/s-87.4MiB/s (91.7MB/s-91.7MB/s), io=175MiB (183MB), run=2001-2001msec
00:12:29.063 -----------------------------------------------------
00:12:29.063 Suppressions used:
00:12:29.063 count bytes template
00:12:29.063 1 32 /usr/src/fio/parse.c
00:12:29.063 1 8 libtcmalloc_minimal.so
00:12:29.063 -----------------------------------------------------
00:12:29.063
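Note: each pass of this loop drives one PCIe controller through fio's external SPDK ioengine; the loop repeats next for 0000:00:12.0 and 0000:00:13.0. The invocation reduced to a sketch, with paths as traced above (the LD_PRELOAD mirrors the fio_plugin helper, which loads the ASAN runtime ahead of the plugin on instrumented builds):

    PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    JOB=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
    # The PCI address is written with dots (0000.00.12.0) because fio would
    # otherwise treat the colons of a normal BDF as filename separators.
    LD_PRELOAD="/usr/lib64/libasan.so.8 $PLUGIN" \
        /usr/src/fio/fio "$JOB" '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096

The job file supplies ioengine=spdk and iodepth=128; --bs appears to be forced to 4096 whenever the preceding spdk_nvme_identify / grep 'Extended Data LBA' check finds no extended-LBA formats on the namespace.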
00:12:29.063 11:19:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:12:29.063 11:19:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:12:29.063 11:19:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:12:29.063 11:19:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:12:29.321 11:19:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:12:29.321 11:19:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:12:29.580 11:19:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:12:29.580 11:19:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:12:29.580 11:19:56 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:12:29.580 11:19:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:12:29.580 11:19:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:12:29.580 11:19:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:12:29.580 11:19:56 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:12:29.580 11:19:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:12:29.580 11:19:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:12:29.580 11:19:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:12:29.581 11:19:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:12:29.581 11:19:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:12:29.581 11:19:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:12:29.581 11:19:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:12:29.581 11:19:56 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:12:29.581 11:19:56 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:12:29.581 11:19:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:12:29.581 11:19:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:12:29.840 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:12:29.840 fio-3.35
00:12:29.840 Starting 1 thread
00:12:34.048
00:12:34.048 test: (groupid=0, jobs=1): err= 0: pid=65713: Tue Dec 10 11:20:00 2024
00:12:34.048 read: IOPS=22.6k, BW=88.2MiB/s (92.5MB/s)(176MiB/2001msec)
00:12:34.048 slat (nsec): min=3791, max=26586, avg=4541.00, stdev=819.76
00:12:34.048 clat (usec): min=215, max=6802, avg=2830.05, stdev=180.56
00:12:34.048 lat (usec): min=219, max=6813, avg=2834.59, stdev=180.59
00:12:34.048 clat percentiles (usec):
00:12:34.048 | 1.00th=[ 2376], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2737],
00:12:34.048 | 30.00th=[ 2802], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868],
00:12:34.048 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 2999],
00:12:34.048 | 99.00th=[ 3163], 99.50th=[ 3392], 99.90th=[ 4817], 99.95th=[ 5604],
00:12:34.048 | 99.99th=[ 6652]
00:12:34.048 bw ( KiB/s): min=88512, max=91400, per=99.57%, avg=89922.67, stdev=1445.15, samples=3
00:12:34.048 iops : min=22128, max=22850, avg=22480.67, stdev=361.29, samples=3
00:12:34.048 write: IOPS=22.5k, BW=87.7MiB/s (92.0MB/s)(176MiB/2001msec); 0 zone resets
00:12:34.049 slat (nsec): min=3829, max=31066, avg=4755.18, stdev=855.91
00:12:34.049 clat (usec): min=189, max=6747, avg=2837.10, stdev=186.81
00:12:34.049 lat (usec): min=194, max=6752, avg=2841.85, stdev=186.83
00:12:34.049 clat percentiles (usec):
00:12:34.049 | 1.00th=[ 2376], 5.00th=[ 2671], 10.00th=[ 2704], 20.00th=[ 2769],
00:12:34.049 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2835], 60.00th=[ 2868],
00:12:34.049 | 70.00th=[ 2900], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 2999],
00:12:34.049 | 99.00th=[ 3163], 99.50th=[ 3392], 99.90th=[ 5080], 99.95th=[ 5800],
00:12:34.049 | 99.99th=[ 6587]
00:12:34.049 bw ( KiB/s): min=89272, max=90944, per=100.00%, avg=90141.33, stdev=837.99, samples=3
00:12:34.049 iops : min=22318, max=22736, avg=22535.33, stdev=209.50, samples=3
00:12:34.049 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
00:12:34.049 lat (msec) : 2=0.27%, 4=99.45%, 10=0.24%
00:12:34.049 cpu : usr=99.45%, sys=0.00%, ctx=5, majf=0, minf=608
00:12:34.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:12:34.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:34.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:34.049 issued rwts: total=45176,44937,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:34.049 latency : target=0, window=0, percentile=100.00%, depth=128
00:12:34.049
00:12:34.049 Run status group 0 (all jobs):
00:12:34.049 READ: bw=88.2MiB/s (92.5MB/s), 88.2MiB/s-88.2MiB/s (92.5MB/s-92.5MB/s), io=176MiB (185MB), run=2001-2001msec
00:12:34.049 WRITE: bw=87.7MiB/s (92.0MB/s), 87.7MiB/s-87.7MiB/s (92.0MB/s-92.0MB/s), io=176MiB (184MB), run=2001-2001msec
00:12:34.049 -----------------------------------------------------
00:12:34.049 Suppressions used:
00:12:34.049 count bytes template
00:12:34.049 1 32 /usr/src/fio/parse.c
00:12:34.049 1 8 libtcmalloc_minimal.so
00:12:34.049 -----------------------------------------------------
00:12:34.049
00:12:34.049 11:20:00 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:12:34.049 11:20:00 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:12:34.049 11:20:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:12:34.049 11:20:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:12:34.307 11:20:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:12:34.307 11:20:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:12:34.307 11:20:01 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:12:34.307 11:20:01 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:12:34.307 11:20:01 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:12:34.307 11:20:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:12:34.307 11:20:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:12:34.307 11:20:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:12:34.307 11:20:01 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:12:34.307 11:20:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:12:34.307 11:20:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:12:34.307 11:20:01 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:12:34.307 11:20:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:12:34.307 11:20:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:12:34.307 11:20:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:12:34.566 11:20:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:12:34.566 11:20:01 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:12:34.566 11:20:01 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:12:34.566 11:20:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:12:34.566 11:20:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:12:34.566 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:12:34.566 fio-3.35
00:12:34.566 Starting 1 thread
00:12:39.836
00:12:39.836 test: (groupid=0, jobs=1): err= 0: pid=65775: Tue Dec 10 11:20:06 2024
00:12:39.836 read: IOPS=22.2k, BW=86.7MiB/s (90.9MB/s)(173MiB/2001msec)
00:12:39.836 slat (nsec): min=3786, max=65976, avg=4512.27, stdev=1134.48
00:12:39.836 clat (usec): min=176, max=14408, avg=2879.25, stdev=427.28
00:12:39.837 lat (usec): min=181, max=14470, avg=2883.76, stdev=427.66
00:12:39.837 clat percentiles (usec):
00:12:39.837 | 1.00th=[ 2114], 5.00th=[ 2573], 10.00th=[ 2671], 20.00th=[ 2737],
00:12:39.837 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900],
00:12:39.837 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3130],
00:12:39.837 | 99.00th=[ 4359], 99.50th=[ 4948], 99.90th=[ 8717], 99.95th=[11469],
00:12:39.837 | 99.99th=[13960]
00:12:39.837 bw ( KiB/s): min=83560, max=90072, per=98.02%, avg=86986.67, stdev=3269.39, samples=3
00:12:39.837 iops : min=20890, max=22518, avg=21746.67, stdev=817.35, samples=3
00:12:39.837 write: IOPS=22.0k, BW=86.1MiB/s (90.2MB/s)(172MiB/2001msec); 0 zone resets
00:12:39.837 slat (nsec): min=3859, max=70427, avg=4734.46, stdev=1115.67
00:12:39.837 clat (usec): min=266, max=14094, avg=2884.03, stdev=437.87
00:12:39.837 lat (usec): min=271, max=14117, avg=2888.77, stdev=438.21
00:12:39.837 clat percentiles (usec):
00:12:39.837 | 1.00th=[ 2073], 5.00th=[ 2573], 10.00th=[ 2671], 20.00th=[ 2737],
00:12:39.837 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900],
00:12:39.837 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3130],
00:12:39.837 | 99.00th=[ 4359], 99.50th=[ 4948], 99.90th=[ 9765], 99.95th=[11731],
00:12:39.837 | 99.99th=[13698]
00:12:39.837 bw ( KiB/s): min=83512, max=90944, per=98.95%, avg=87200.00, stdev=3716.32, samples=3
00:12:39.837 iops : min=20878, max=22736, avg=21800.00, stdev=929.08, samples=3
00:12:39.837 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
00:12:39.837 lat (msec) : 2=0.79%, 4=97.90%, 10=1.18%, 20=0.08%
00:12:39.837 cpu : usr=99.40%, sys=0.05%, ctx=5, majf=0, minf=606
00:12:39.837 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:12:39.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:39.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:39.837 issued rwts: total=44396,44084,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:39.837 latency : target=0, window=0, percentile=100.00%, depth=128
00:12:39.837
00:12:39.837 Run status group 0 (all jobs):
00:12:39.837 READ: bw=86.7MiB/s (90.9MB/s), 86.7MiB/s-86.7MiB/s (90.9MB/s-90.9MB/s), io=173MiB (182MB), run=2001-2001msec
00:12:39.837 WRITE: bw=86.1MiB/s (90.2MB/s), 86.1MiB/s-86.1MiB/s (90.2MB/s-90.2MB/s), io=172MiB (181MB), run=2001-2001msec
00:12:39.837 -----------------------------------------------------
00:12:39.837 Suppressions used:
00:12:39.837 count bytes template
00:12:39.837 1 32 /usr/src/fio/parse.c
00:12:39.837 1 8 libtcmalloc_minimal.so
00:12:39.837 -----------------------------------------------------
00:12:39.837
00:12:39.837 ************************************
00:12:39.837 END TEST nvme_fio
00:12:39.837 ************************************
00:12:39.837 11:20:06 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:12:39.837 11:20:06 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true
00:12:39.837
00:12:39.837 real 0m20.255s
00:12:39.837 user 0m15.243s
00:12:39.837 sys 0m5.944s
00:12:39.837 11:20:06 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:39.837 11:20:06 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:12:39.837 ************************************
00:12:39.837 END TEST nvme
00:12:39.837 ************************************
00:12:39.837
00:12:39.837 real 1m35.559s
00:12:39.837 user 3m44.148s
00:12:39.837 sys 0m25.001s
00:12:39.837 11:20:06 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:39.837 11:20:06 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:39.837 11:20:06 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]]
00:12:39.837 11:20:06 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:12:39.837 11:20:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:39.837 11:20:06 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:39.837 11:20:06 -- common/autotest_common.sh@10 -- # set +x
00:12:39.837 ************************************
00:12:39.837 START TEST nvme_scc
00:12:39.837 ************************************
00:12:39.837 11:20:06 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:12:40.096 * Looking for test storage...
00:12:40.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:40.096 11:20:07 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:40.096 11:20:07 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:40.096 11:20:07 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:40.096 11:20:07 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@345 -- # : 1 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@368 -- # return 0 00:12:40.096 11:20:07 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.096 11:20:07 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:40.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.096 --rc genhtml_branch_coverage=1 00:12:40.096 --rc genhtml_function_coverage=1 00:12:40.096 --rc genhtml_legend=1 00:12:40.096 --rc geninfo_all_blocks=1 00:12:40.096 --rc geninfo_unexecuted_blocks=1 00:12:40.096 00:12:40.096 ' 00:12:40.096 11:20:07 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:40.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.096 --rc genhtml_branch_coverage=1 00:12:40.096 --rc genhtml_function_coverage=1 00:12:40.096 --rc genhtml_legend=1 00:12:40.096 --rc geninfo_all_blocks=1 00:12:40.096 --rc geninfo_unexecuted_blocks=1 00:12:40.096 00:12:40.096 ' 00:12:40.096 11:20:07 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:40.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.096 --rc genhtml_branch_coverage=1 00:12:40.096 --rc genhtml_function_coverage=1 00:12:40.096 --rc genhtml_legend=1 00:12:40.096 --rc geninfo_all_blocks=1 00:12:40.096 --rc geninfo_unexecuted_blocks=1 00:12:40.096 00:12:40.096 ' 00:12:40.096 11:20:07 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:40.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.096 --rc genhtml_branch_coverage=1 00:12:40.096 --rc genhtml_function_coverage=1 00:12:40.096 --rc genhtml_legend=1 00:12:40.096 --rc geninfo_all_blocks=1 00:12:40.096 --rc geninfo_unexecuted_blocks=1 00:12:40.096 00:12:40.096 ' 00:12:40.096 11:20:07 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:40.096 11:20:07 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:40.096 11:20:07 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:40.096 11:20:07 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:40.096 11:20:07 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.096 11:20:07 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.096 11:20:07 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.096 11:20:07 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.096 11:20:07 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.096 11:20:07 nvme_scc -- paths/export.sh@5 -- # export PATH 00:12:40.096 11:20:07 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:40.096 11:20:07 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:12:40.096 11:20:07 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:40.096 11:20:07 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:12:40.096 11:20:07 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:40.096 11:20:07 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:12:40.096 11:20:07 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:40.096 11:20:07 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:40.096 11:20:07 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:40.096 11:20:07 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:12:40.096 11:20:07 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:40.096 11:20:07 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:12:40.096 11:20:07 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:12:40.096 11:20:07 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:12:40.096 11:20:07 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:40.663 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:40.922 Waiting for block devices as requested 00:12:40.922 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:41.181 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:41.181 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:41.440 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:46.720 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:46.720 11:20:13 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:12:46.720 11:20:13 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:46.720 11:20:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:46.720 11:20:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:46.720 11:20:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:46.721 11:20:13 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:46.721 11:20:13 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:46.721 11:20:13 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:46.721 11:20:13 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
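The scan that starts here (scan_nvme_ctrls -> nvme_get) walks /sys/class/nvme/nvme*, runs nvme-cli's id-ctrl against each controller, and folds every "field : value" output line into a bash associative array; that is why the trace above and below is one IFS=: / read / eval triplet per Identify field. A simplified sketch of the same parsing pattern, assuming nvme-cli is installed and /dev/nvme0 exists (the real helper is nvme_get in test/common/nvme/functions.sh):

    #!/usr/bin/env bash
    # Parse nvme-cli "field : value" lines into an associative array.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue     # skip blank/unparsable lines
        reg=${reg//[[:space:]]/}                 # "vid       " -> vid
        ctrl[$reg]=${val# }                      # keep value, drop one pad space
    done < <(nvme id-ctrl /dev/nvme0)
    printf 'vid=%s sn=%s mdts=%s\n' "${ctrl[vid]}" "${ctrl[sn]}" "${ctrl[mdts]}"

Because val is the last variable in the read, it keeps any further colons intact, which is how multi-field values like the ps0 power-state line survive into the array.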
00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.721 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.721 11:20:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
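Several of the registers captured so far (oacs=0x12a, frmw=0x3, lpa=0x7) are bitmasks defined by the NVMe base specification rather than plain numbers, so consumers of these arrays test individual bits with shell arithmetic. A hedged example; the bit meanings cited in the comments are taken from the NVMe spec, and the literal values are the ones this trace recorded for the QEMU controller:

    # OACS bit 1 = Format NVM command supported; LPA bit 2 = extended data
    # for Get Log Page (per the NVMe base spec).
    declare -A nvme0=([oacs]=0x12a [lpa]=0x7)
    (( ${nvme0[oacs]} & (1 << 1) )) && echo "Format NVM supported"
    (( ${nvme0[lpa]}  & (1 << 2) )) && echo "extended Get Log Page supported"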
00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.722 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:46.723 11:20:13 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.723 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:46.724 11:20:13 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.724 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:12:46.725 
11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
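The id-ns fields above are enough to compute the namespace size in bytes: nsze counts logical blocks, and the low nibble of flbas=0x4 selects LBA format 4, which this log goes on to report as "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte blocks. Worked out with the values from this run:

    # nsze=0x140000 blocks of 4096 bytes (lbads=12 from lbaf4 in this log).
    nsze=$((0x140000))
    flbas=$((0x4))
    fmt=$(( flbas & 0xf ))   # low nibble = index of the active LBA format
    lbads=12
    echo "format=$fmt size=$(( nsze * (1 << lbads) )) bytes"
    # -> format=4 size=5368709120 bytes, i.e. exactly 5 GiB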
00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:12:46.725 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:12:46.726 11:20:13 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:46.726 11:20:13 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=:
00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:12:46.726 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
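The mssrl, mcl and msrc values captured above are the namespace's Simple Copy limits: the largest single source range in blocks, the largest total copy in blocks, and the zero-based maximum source range count. A minimal bash sketch of a pre-flight check against them (the variable names are illustrative, not from functions.sh):

# Hypothetical sanity check against the Simple Copy limits parsed above;
# msrc is zero-based per the NVMe spec, hence the +1.
mssrl=128 mcl=128 msrc=127
ranges=4 blocks_per_range=32
if (( ranges <= msrc + 1 && blocks_per_range <= mssrl && ranges * blocks_per_range <= mcl )); then
    echo "copy descriptor fits within SCC limits"
fi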
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:12:46.727 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:12:46.728 11:20:13 nvme_scc -- scripts/common.sh@18 -- # local i
00:12:46.728 11:20:13 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:12:46.728 11:20:13 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:46.728 11:20:13 nvme_scc -- scripts/common.sh@27 -- # return 0
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
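The nvme_get pattern traced above (IFS=:, read -r reg val, then eval into a global associative array) boils down to roughly the following sketch; nvme_get_sketch is an illustrative stand-in for the real helper in nvme/functions.sh and assumes nvme-cli plus a reachable device:

# Sketch of the parsing loop exercised in this trace: feed "reg : val"
# lines from nvme-cli into a bash associative array named by the caller.
nvme_get_sketch() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"             # e.g. declares global assoc array nvme1
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue   # skip headers and blank lines
        reg=${reg//[[:space:]]/}    # 'vid       ' -> 'vid'
        eval "${ref}[\$reg]=\${val# }"
    done < <("$@")
}
# Usage (needs root and nvme-cli installed):
# nvme_get_sketch nvme1 nvme id-ctrl /dev/nvme1
# echo "${nvme1[sn]}"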
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:12:46.728 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
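Two controller fields above decode into friendlier numbers: ver packs the spec version into major/minor bytes, and mdts caps transfers at a power-of-two multiple of the controller's minimum memory page size. The 4 KiB page size below is an assumption about CAP.MPSMIN, which this log does not show:

ver=0x10400 mdts=7
printf 'NVMe %d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff))   # NVMe 1.4
echo "max transfer: $(((1 << mdts) * 4096)) bytes"            # 524288 = 512 KiB, assuming 4 KiB MPSMIN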
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:12:46.729 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
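sqes and cqes above pack the maximum and required queue entry sizes as log2 nibbles; 0x66 and 0x44 decode to the standard fixed 64-byte SQE and 16-byte CQE:

sqes=0x66 cqes=0x44
echo "SQE: max $((1 << (sqes >> 4))), required $((1 << (sqes & 0xf))) bytes"   # 64 / 64
echo "CQE: max $((1 << (cqes >> 4))), required $((1 << (cqes & 0xf))) bytes"   # 16 / 16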
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
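oncs is the optional-command bitmask, and bit 8 (Copy) is the capability this nvme_scc suite is probing for; oacs=0x12a above likewise advertises Format NVM, Namespace Management, Directives and Doorbell Buffer Config. A sketch of the kind of gate a test could apply (the helper name is hypothetical, not from functions.sh):

# Hypothetical gate on ONCS bit 8 (Simple Copy), per the NVMe spec.
supports_scc_sketch() {
    local oncs=$1
    (( oncs & (1 << 8) ))
}
supports_scc_sketch 0x15d && echo "Simple Copy supported, test can proceed"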
00:12:46.730 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
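The for-ns glob above uses extglob to pick up both the generic character node (ng1n1) and the block node (nvme1n1) in a single pass. A rough standalone equivalent, assuming the same /sys/class/nvme layout:

shopt -s extglob
ctrl=/sys/class/nvme/nvme1
inst=${ctrl##*nvme}   # strips through the last 'nvme', leaving '1'
for ns in "$ctrl/"@("ng${inst}"|"${ctrl##*/}n")*; do
    echo "namespace node: ${ns##*/}"   # -> ng1n1, nvme1n1
done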
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:12:46.731 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 '
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.732 
11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.732 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
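The stretch of trace above is the body of the nvme_get helper (nvme/functions.sh@16-23) walking nvme-cli's id-ns output one "field : value" line at a time and storing each pair in a global associative array named after the device node. A simplified reconstruction of that loop, with quoting pared down and the nvme-cli path (/usr/local/src/nvme-cli/nvme in this run) reduced to plain nvme; this is a sketch of what the trace shows, not the verbatim function:

    # Sketch of nvme_get as reconstructed from the trace above.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                    # e.g. declares nvme1n1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # header line has no value
            reg=${reg//[[:space:]]/}           # "nsze " -> "nsze"
            eval "${ref}[\$reg]=\"${val# }\""  # nvme1n1[nsze]="0x17a17a"
        done < <(nvme "$@")                    # id-ns /dev/nvme1n1, etc.
    }

Because read splits only on the first colon and hands the remainder to val, multi-colon lines such as "lbaf0 : ms:0 lbads:9 rp:0" survive intact, which is why the lbaf entries above carry their full "ms:... lbads:... rp:..." strings.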
00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:46.733 
11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.733 11:20:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:46.733 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:46.734 11:20:13 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:46.734 11:20:13 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:46.734 11:20:13 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:46.734 11:20:13 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:46.734 11:20:13 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.734 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
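The ver field captured a few entries back (nvme2[ver]=0x10400) is the packed NVMe version word: major in the top 16 bits, minor in bits 15:8, tertiary in the low byte, so 0x10400 decodes to NVMe 1.4.0. A one-liner to unpack it:

    decode_nvme_ver() {
        local v=$(( $1 ))
        printf '%d.%d.%d\n' $(( v >> 16 )) $(( (v >> 8) & 0xff )) $(( v & 0xff ))
    }
    decode_nvme_ver 0x10400   # -> 1.4.0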
00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:46.735 11:20:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
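The wctemp=343 / cctemp=373 pair recorded at the top of this stretch is in Kelvin, as the NVMe spec defines the warning and critical composite temperature thresholds; converted (integer Kelvin, so the 0.15 is dropped), the emulated controller warns at 70 degrees C and goes critical at 100 degrees C:

    k_to_c() { echo $(( $1 - 273 )); }
    k_to_c 343   # -> 70   (WCTEMP, warning threshold in C)
    k_to_c 373   # -> 100  (CCTEMP, critical threshold in C)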
00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:46.735 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:46.736 11:20:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:46.736 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:47.002 
11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.002 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:47.003 
11:20:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
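The ng2n1 fields above are filled by nvme_get's read loop: each line of `nvme id-ns` output is split at the first colon (IFS=:), the key is kept, and the value is eval'ed into a global associative array. A minimal sketch of that loop, with the key/value whitespace normalization being an assumption (the trace only shows the resulting bare keys):

    declare -gA ng2n1=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # assumed cleanup; trace shows keys like nsze, lbaf0
        [[ -n $reg && -n $val ]] || continue  # skips header lines, as in the "[[ -n '' ]]" entries
        eval "ng2n1[$reg]=\"${val# }\""       # mirrors eval 'ng2n1[nsze]="0x100000"' above
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1)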
00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:12:47.003 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:47.004 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:12:47.005 11:20:13 nvme_scc -- 
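ng2n1 is now fully captured and registered, with flbas=0x4 and lbaf4='ms:0 lbads:12 rp:0 (in use)'. As a hedged sketch, the active LBA size follows directly from those two fields (the low nibble of flbas selects the format; lbads is a power-of-two shift):

    flbas=$(( ng2n1[flbas] & 0xf ))             # -> 4 in this trace
    lbaf=${ng2n1[lbaf$flbas]}                   # -> 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # -> 12
    echo $(( 1 << lbads ))                      # -> 4096-byte blocks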
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 
11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.005 11:20:13 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.005 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:47.006 11:20:13 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:47.006 11:20:13 
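The functions.sh@58 entries index each namespace by the digits after the last 'n' in its device path, which is how ng2n2 was registered just above. A one-line sketch of that derivation:

    ns=/sys/class/nvme/nvme2/ng2n2
    nsid=${ns##*n}              # strips through the last 'n' -> 2
    declare -gA nvme2_ns
    nvme2_ns[$nsid]=${ns##*/}   # as in _ctrl_ns[${ns##*n}]=ng2n2 above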
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:12:47.006 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.007 11:20:13 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.008 11:20:13 nvme_scc -- 
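The loop now reparses the same namespace through its block node, /dev/nvme2n1, after having read the generic char node /dev/ng2n1. If both views are consistent, their identify data should be identical; a quick hedged sanity check using the nvme-cli path from the trace:

    diff <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1) \
         <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1) \
      && echo 'char and block nodes report identical id-ns data'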
00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:12:47.007 11:20:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:12:47.008 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
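Each nvme_get pass above is the same mechanism: run nvme-cli against the device, split every output line on the first ':' into a register name and value, and store the pair in a global associative array named after the device (nvme2n1[nsze]=0x100000 and so on). A condensed, standalone sketch of that loop, assuming nvme-cli's plain-text id-ns output and root privileges (the real nvme_get in nvme/functions.sh is more involved):

    #!/usr/bin/env bash
    # Parse `nvme id-ns` output into a global associative array,
    # as the trace above does. $1 = array/device name, e.g. nvme2n1.
    nvme_get_sketch() {
      local ref=$1 reg val
      local -gA "$ref=()"
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                      # "lbaf  0" -> "lbaf0"
        [[ -n $reg && -n ${val//[[:space:]]/} ]] || continue
        val=${val#"${val%%[![:space:]]*}"}            # trim leading blanks
        eval "${ref}[$reg]=\$val"                     # nvme2n1[nsze]=0x100000
      done < <(nvme id-ns "/dev/$ref")
    }
    nvme_get_sketch nvme2n1 && echo "${nvme2n1[nsze]}"   # -> 0x100000 here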
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:12:47.009 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
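Every namespace above reports the same format table: lbaf0 through lbaf3 are 512-byte (lbads:9) formats with 0/8/16/64 bytes of metadata, lbaf4 through lbaf7 the 4096-byte (lbads:12) equivalents, and flbas=0x4 marks lbaf4 as in use, i.e. 4096-byte blocks with no metadata. A worked check of that arithmetic (a sketch; lba_size and the inlined array literal are illustrative, not from functions.sh):

    #!/usr/bin/env bash
    # Derive the in-use LBA size from the values captured above.
    declare -A nvme2n2=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
    lba_size() {                                 # illustrative helper
      local fmt=$(( ${nvme2n2[flbas]} & 0xf ))   # low nibble = format index
      local lbads=${nvme2n2[lbaf$fmt]#*lbads:}   # strip up to "lbads:"
      lbads=${lbads%% *}                         # keep just the number
      echo $(( 1 << lbads ))                     # 2^12 = 4096 bytes
    }
    lba_size   # -> 4096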
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:12:47.010 11:20:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:12:47.011 11:20:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:12:47.011 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:12:47.011 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:12:47.011 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:12:47.011 11:20:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:12:47.011 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
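All the namespace passes above come from the extglob pattern in the @54 loop: for ctrl=/sys/class/nvme/nvme2 it expands to @(ng2|nvme2n)*, matching both node flavors, and ${ns##*n} strips everything up to the last 'n' to yield the namespace index used as the _ctrl_ns key. A standalone sketch of the same enumeration (paths assume the sysfs layout seen in this run):

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    declare -A _ctrl_ns=()
    ctrl=/sys/class/nvme/nvme2
    # Match ng2n* (char) and nvme2n* (block) entries under the controller.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      _ctrl_ns[${ns##*n}]=${ns##*/}   # keyed by namespace index; the block
    done                              # node sorts after ng* and so wins
    for i in "${!_ctrl_ns[@]}"; do echo "ns$i -> ${_ctrl_ns[$i]}"; done
    # -> ns1 -> nvme2n1, ns2 -> nvme2n2, ns3 -> nvme2n3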
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:12:47.012 11:20:14 nvme_scc -- scripts/common.sh@18 -- # local i
00:12:47.012 11:20:14 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:12:47.012 11:20:14 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:47.012 11:20:14 nvme_scc -- scripts/common.sh@27 -- # return 0
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 '
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl '
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 '
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:12:47.012 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
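pci_can_use succeeds here because both filters are empty: the [[ =~ 0000:00:13.0 ]] test runs against an unset allowlist and [[ -z '' ]] confirms an empty blocklist, so the controller at 0000:00:13.0 is accepted. A sketch of that gating style, using SPDK's PCI_ALLOWED/PCI_BLOCKED naming; the body approximates, rather than reproduces, scripts/common.sh:

    #!/usr/bin/env bash
    # Allow a PCI BDF unless a filter list says otherwise (sketch).
    pci_can_use() {
      local bdf=$1 i
      if [[ -n ${PCI_ALLOWED:-} ]]; then      # allowlist set: must match
        for i in $PCI_ALLOWED; do
          [[ $i == "$bdf" ]] && return 0
        done
        return 1
      fi
      for i in ${PCI_BLOCKED:-}; do           # blocklist set: must not match
        [[ $i == "$bdf" ]] && return 1
      done
      return 0                                # both empty -> usable
    }
    pci_can_use 0000:00:13.0 && echo usable   # -> usable (no filters set)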
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0
00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:47.013 11:20:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.013 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 
11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:47.014 
11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:47.014 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.015 11:20:14 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:47.015 11:20:14 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:47.015 11:20:14 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
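The eval pattern traced above is how functions.sh turns nvme-cli id-ctrl output into a bash associative array: each "reg : val" line of the identify data becomes nvme3[reg]=val. A minimal sketch of that loop, paraphrased from the trace rather than copied from the functions.sh source (the parse_id_ctrl name and the generic "ctrl" array are illustrative):

  # Read "reg : val" pairs from nvme-cli and store them in an
  # associative array, as functions.sh@16-23 does in the trace above.
  parse_id_ctrl() {
    local dev=$1 reg val
    declare -gA ctrl=()             # the trace names these nvme0..nvme3
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}      # id-ctrl pads register names with spaces
      val=${val# }                  # drop the space that follows the colon
      [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl "$dev")
  }

With the array filled, a lookup such as ${ctrl[oncs]} yields 0x15d for the controllers in this run, which is what the feature probes below rely on.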
00:12:47.015 11:20:14 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:12:47.015 11:20:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1
[functions.sh@198-199 repeat the same ctrl_has_scc/get_oncs sequence for nvme0, nvme3 and nvme2: each reports oncs=0x15d with bit 8 set and is echoed]
00:12:47.016 11:20:14 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:12:47.016 11:20:14 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:12:47.016 11:20:14 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:12:47.274 11:20:14 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:12:47.274 11:20:14 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
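Every controller passed because of the (( oncs & 1 << 8 )) test: in the NVMe base specification, ONCS bit 8 advertises the optional Copy command, which is what "scc" (simple copy command) support means here, and nvme1 is simply the first matching entry echoed back. Decoding the 0x15d all four controllers reported:

  oncs=0x15d                                               # 0b1_0101_1101
  (( oncs & 1 << 0 )) && echo "Compare"                    # bit 0 set
  (( oncs & 1 << 2 )) && echo "Dataset Management"         # bit 2 set
  (( oncs & 1 << 3 )) && echo "Write Zeroes"               # bit 3 set
  (( oncs & 1 << 4 )) && echo "Save/Select in Features"    # bit 4 set
  (( oncs & 1 << 6 )) && echo "Timestamp"                  # bit 6 set
  (( oncs & 1 << 8 )) && echo "Copy"                       # bit 8 set: ctrl_has_scc succeeds

Bits 1 (Write Uncorrectable), 5 (Reservations) and 7 (Verify) are clear in 0x15d.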
00:12:47.274 11:20:14 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:47.841 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:48.777 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:12:48.777 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:12:48.777 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:12:48.777 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:12:48.777 11:20:15 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:12:48.777 11:20:15 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:48.777 11:20:15 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:48.777 11:20:15 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:12:48.777 ************************************
00:12:48.777 START TEST nvme_simple_copy
00:12:48.777 ************************************
00:12:48.777 11:20:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:12:49.036 Initializing NVMe Controllers
00:12:49.036 Attaching to 0000:00:10.0
00:12:49.036 Controller supports SCC. Attached to 0000:00:10.0
00:12:49.036 Namespace ID: 1 size: 6GB
00:12:49.036 Initialization complete.
00:12:49.036
00:12:49.036 Controller QEMU NVMe Ctrl (12340 )
00:12:49.036 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:12:49.036 Namespace Block Size:4096
00:12:49.036 Writing LBAs 0 to 63 with Random Data
00:12:49.036 Copied LBAs from 0 - 63 to the Destination LBA 256
00:12:49.036 LBAs matching Written Data: 64
00:12:49.036
00:12:49.036 real 0m0.317s
00:12:49.036 user 0m0.127s
00:12:49.036 sys 0m0.088s
00:12:49.036 11:20:16 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:49.036 11:20:16 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:12:49.036 ************************************
00:12:49.036 END TEST nvme_simple_copy
00:12:49.036 ************************************
00:12:49.036
00:12:49.036 real 0m9.207s
00:12:49.036 user 0m1.604s
00:12:49.036 sys 0m2.441s
00:12:49.036 11:20:16 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:49.036 11:20:16 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:12:49.036 ************************************
00:12:49.036 END TEST nvme_scc
00:12:49.036 ************************************
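The simple_copy binary drives the Copy command through SPDK's userspace PCIe driver, but the write/copy/compare round-trip it just verified can be approximated from the shell with nvme-cli against the kernel driver. A rough equivalent (device name and LBA numbers are illustrative, and the copy flag spellings follow recent nvme-cli releases, so they may differ by version):

  # Fill LBAs 0-63 with random data (4096-byte blocks, as in the log)
  dd if=/dev/urandom of=/dev/nvme0n1 bs=4096 count=64 oflag=direct
  # Simple Copy of that source range to destination LBA 256
  nvme copy /dev/nvme0n1 --sdlba=256 --slbs=0 --blocks=63   # --blocks is 0-based
  # Both 64-LBA ranges should now compare equal
  cmp <(dd if=/dev/nvme0n1 bs=4096 skip=0 count=64 2>/dev/null) \
      <(dd if=/dev/nvme0n1 bs=4096 skip=256 count=64 2>/dev/null)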
00:12:49.296 11:20:16 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:12:49.296 11:20:16 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:12:49.296 11:20:16 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:12:49.296 11:20:16 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:12:49.296 11:20:16 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:12:49.296 11:20:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:49.296 11:20:16 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:49.296 11:20:16 -- common/autotest_common.sh@10 -- # set +x
00:12:49.296 ************************************
00:12:49.296 START TEST nvme_fdp
00:12:49.296 ************************************
00:12:49.296 11:20:16 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:12:49.296 * Looking for test storage...
00:12:49.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:12:49.296 11:20:16 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:49.296 11:20:16 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:12:49.296 11:20:16 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:49.556 11:20:16 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@368 -- # return 0
00:12:49.556 11:20:16 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:49.556 11:20:16 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:49.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:49.556 --rc genhtml_branch_coverage=1
00:12:49.556 --rc genhtml_function_coverage=1
00:12:49.556 --rc genhtml_legend=1
00:12:49.556 --rc geninfo_all_blocks=1
00:12:49.556 --rc geninfo_unexecuted_blocks=1
00:12:49.556
00:12:49.556 '
[autotest_common.sh@1724-1725 then assign LCOV_OPTS and export LCOV='lcov' with the same --rc lcov/genhtml/geninfo flags repeated]
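The lt 1.15 2 trace above is scripts/common.sh comparing the installed lcov version against 2 component-wise after splitting both strings on dots and dashes; 1 < 2 decides it on the first component, hence the early return 0. A condensed sketch of that logic (simplified; the real cmp_versions also validates each component through the decimal helper seen in the trace):

  # Succeed if version $1 is strictly lower than version $2.
  lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
  }

  lt 1.15 2 && echo "lcov predates 2.0"   # matches the trace: returns 0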
00:12:49.556 11:20:16 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:12:49.556 11:20:16 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:12:49.556 11:20:16 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:12:49.556 11:20:16 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:12:49.556 11:20:16 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:49.556 11:20:16 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[paths/export.sh@2-6 prepend the golangci 1.54.2, protoc 21.7 and go 1.21.1 toolchain directories to PATH, export it, and echo the resulting PATH]
00:12:49.556 11:20:16 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:12:49.556 11:20:16 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:12:49.556 11:20:16 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:12:49.556 11:20:16 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:12:49.556 11:20:16 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:12:49.556 11:20:16 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:12:49.556 11:20:16 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:12:49.556 11:20:16 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:12:49.556 11:20:16 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
00:12:49.556 11:20:16 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:49.556 11:20:16 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:12:50.125 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:50.383 Waiting for block devices as requested
00:12:50.383 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:12:50.383 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:12:50.642 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:12:50.642 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:12:55.941 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:12:55.941 11:20:22 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
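scan_nvme_ctrls, traced next, is what fills the four maps just declared: one id-ctrl snapshot per controller plus the controller-to-PCI-address bookkeeping already seen for nvme3 in the nvme_scc run. An abridged sketch of that walk (error handling and the per-namespace id-ns pass omitted; pci_can_use is sketched after the next trace block):

  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0
    pci_can_use "$pci" || continue
    ctrl_dev=${ctrl##*/}                              # e.g. nvme0
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills the nvme0[...] array
    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of the namespace map
    bdfs["$ctrl_dev"]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # nvme0 -> slot 0, nvme1 -> slot 1, ...
  done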
00:12:55.941 11:20:22 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:12:55.941 11:20:22 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:55.941 11:20:22 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:12:55.941 11:20:22 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:12:55.941 11:20:22 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:12:55.941 11:20:22 nvme_fdp -- scripts/common.sh@18 -- # local i
00:12:55.941 11:20:22 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:12:55.942 11:20:22 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:55.942 11:20:22 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:12:55.942 11:20:22 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
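pci_can_use returning 0 means 0000:00:11.0 survives both filters: the allow-list match (the [[ =~ 0000:00:11.0 ]] against an empty pattern) and the block-list check (the [[ -z '' ]]). A simplified sketch of such a filter, modeled on the PCI_ALLOWED/PCI_BLOCKED environment variables SPDK's setup.sh honors rather than copied from scripts/common.sh:

  pci_can_use() {
    local i
    # With a non-empty allow-list, the address must be on it ...
    if [[ -n ${PCI_ALLOWED:-} ]]; then
      [[ " $PCI_ALLOWED " == *" $1 "* ]] || return 1
    fi
    # ... and it must never appear on the block-list.
    for i in ${PCI_BLOCKED:-}; do
      [[ $i == "$1" ]] && return 1
    done
    return 0
  }

With both variables unset, as in this run, every controller passes and gets scanned.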
00:12:55.942 11:20:22 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:12:55.942 11:20:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:12:55.942 11:20:22 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:55.942 11:20:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:12:55.942 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:55.942 11:20:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:12:55.942 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:55.942 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:55.942 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:55.942 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:55.942 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:12:55.942 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:12:55.942 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
[functions.sh@21-23 repeat the IFS=:/read/eval cycle for the remaining id-ctrl fields of nvme0: ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66]
00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]]
00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:12:55.944 11:20:22 nvme_fdp --
nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.944 
11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.944 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:55.945 11:20:22 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:12:55.945 11:20:22 
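The trace above is nvme_get at work: it walks nvme-cli's "reg : value" output line by line and evals one associative-array assignment per register, which is where lines like nvme0[oacs]=0x12a come from. A minimal bash sketch of that pattern (nvme_get_sketch is a simplified stand-in, not the exact nvme/functions.sh helper):

    nvme_get_sketch() {
      local ref=$1 dev=$2 reg val
      declare -gA "$ref=()"               # e.g. creates the global array nvme0
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}          # drop padding around the register name
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\${val# }"    # e.g. nvme0[oacs]=0x12a
      done < <(nvme id-ctrl "$dev")
    }
    # usage: nvme_get_sketch nvme0 /dev/nvme0; echo "${nvme0[oacs]}"

Because read splits only on the first colon, values that themselves contain colons (subnqn above) survive intact; the same loop is reused unchanged for id-ns output below.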
00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:12:55.945 11:20:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:12:55.945-946 11:20:22 nvme_fdp -- nvme/functions.sh@21-23 -- # [xtrace condensed] ng0n1 id-ns:
    nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4
    mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0
    nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0
    npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0
    nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000
    eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0'
    lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)'
    lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
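The namespace walk at functions.sh@54 (which continues below for nvme0n1) leans on bash extglob: a single alternation matches both the generic character node (ng0n1) and the block node (nvme0n1) under the controller's sysfs directory. A self-contained sketch using the same glob:

    shopt -s extglob nullglob
    for ctrl in /sys/class/nvme/nvme*; do
      # "${ctrl##*nvme}" -> controller index ("0"), "${ctrl##*/}" -> "nvme0",
      # so for nvme0 the pattern expands to @(ng0|nvme0n)*
      for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "controller ${ctrl##*/}: namespace node ${ns##*/}"
      done
    done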
00:12:55.947 11:20:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:12:55.947 11:20:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:55.947 11:20:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:12:55.947 11:20:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:12:55.947 11:20:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:12:55.947 11:20:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:12:55.947 11:20:22 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:55.947 11:20:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:12:55.947 11:20:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:12:55.947-948 11:20:22 nvme_fdp -- nvme/functions.sh@21-23 -- # [xtrace condensed] nvme0n1 id-ns: identical to ng0n1 above, from nsze=0x140000 through lbaf7='ms:64 lbads:12 rp:0' (lbaf4, ms:0 lbads:12, in use)
00:12:55.948 11:20:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:12:55.948 11:20:22 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:12:55.948 11:20:22 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:12:55.948 11:20:22 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:12:55.948 11:20:22 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:12:55.948 11:20:22 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:55.948 11:20:22 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:12:55.948 11:20:22 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:12:55.948 11:20:22 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:12:55.948 11:20:22 nvme_fdp -- scripts/common.sh@18 -- # local i
00:12:55.948 11:20:22 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:12:55.948 11:20:22 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:55.949 11:20:22 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21-23 -- # [xtrace condensed] nvme1 id-ctrl begins: vid=0x1b36 ssvid=0x1af4
'nvme1[sn]="12340 "' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:55.949 11:20:22 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.949 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.950 11:20:22 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.950 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
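Each reg/val record above is one pass of nvme_get's read loop (functions.sh@16-23 in the trace): nvme-cli's human-readable "field : value" output is split on ':' and eval'd into a global associative array named after the device. A minimal sketch of that parser, reconstructed from the trace — the whitespace trimming shown here is an approximation of what the real helper does, not a verbatim copy:

    # Sketch of nvme_get: parse `nvme id-ctrl`/`id-ns` output into an assoc array.
    nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                    # e.g. declares global nvme1=()
      while IFS=: read -r reg val; do
        [[ -n $val ]] || continue            # skip lines without a "reg : val" pair
        reg=${reg%% *}                       # trim padding after the field name (assumed)
        eval "${ref}[$reg]=\"${val# }\""     # e.g. nvme1[vid]="0x1b36"
      done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

After `nvme_get nvme1 id-ctrl /dev/nvme1` returns, later checks can read fields directly, e.g. ${nvme1[mdts]} or ${nvme1[oncs]}.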
00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:12:55.951 11:20:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
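The @54 loop header uses an extglob alternation so a single pass picks up both the generic character node (ng1n1) and the block node (nvme1n1) of each namespace; both get their own id-ns parse below. A hedged illustration of the expansion (ctrl is set here only for the example):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    # "ng${ctrl##*nvme}" -> ng1, "${ctrl##*/}n" -> nvme1n, so the glob
    # @(ng1|nvme1n)* matches both /sys/class/nvme/nvme1/ng1n1 and .../nvme1n1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      echo "$ns"   # each matching node is parsed with: nvme_get <node> id-ns /dev/<node>
    done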
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:12:55.952 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:12:55.953 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:12:55.954 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:55.955 11:20:22 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:55.955 11:20:22 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:55.955 11:20:22 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:55.955 11:20:22 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
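
The loop this trace keeps replaying is small: nvme_get runs the nvme-cli id-ctrl/id-ns command for a device, splits each output line on ':' into a register name and a value, and evals the pair into a global associative array named after the device (nvme2, ng2n1, ...). A condensed sketch of that loop, reconstructed from the functions.sh@16-23 entries above; the shipped script carries extra quoting and key-normalization logic not shown here:

    # Sketch of the parse loop traced above (functions.sh@16-23).
    # Assumes nvme-cli prints "reg : val" lines and values contain no double quotes.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. local -gA 'nvme2=()', as in the trace
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # skip header/blank lines, as the [[ -n ... ]] checks do
            reg=${reg//[[:space:]]/}         # strip padding around the register name
            eval "${ref}[${reg}]=\"${val# }\""   # e.g. nvme2[vid]="0x1b36"
        done < <(nvme "$@")                  # e.g. nvme id-ctrl /dev/nvme2
    }
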
00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.955 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:55.956 11:20:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
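
The thermal fields captured a few entries up are absolute temperatures in kelvins, per the NVMe Identify Controller layout: wctemp=343 and cctemp=373 are the warning and critical thresholds, roughly 70 C and 100 C. A one-liner conversion against the array the trace just populated (illustrative only; the offset is the usual 273 K approximation):

    # wctemp/cctemp are kelvins; subtract ~273 to get Celsius.
    echo "warning:  $(( nvme2[wctemp] - 273 )) C"   # 343 K -> 70 C
    echo "critical: $(( nvme2[cctemp] - 273 )) C"   # 373 K -> 100 C
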
00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:55.956 11:20:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.956 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.957 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
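
Each lbafN string captured in these dumps (for example 'ms:64 lbads:12 rp:0 (in use)') describes one namespace format: ms is metadata bytes per block, lbads is log2 of the data block size, and rp is a relative-performance hint, so the in-use block size is 2^lbads. A hedged sketch of decoding it from the captured arrays, assuming flbas for nvme1n1 was recorded earlier in this dump (it scrolled past above this excerpt) and that its low nibble selects the format, which holds here since nlbaf is only 7:

    # Illustrative only: derive the active LBA data size from the captured fields.
    declare -n ns=nvme1n1                         # nameref into the array built above
    fmt=$(( ns[flbas] & 0xf ))                    # low 4 bits of flbas pick the lbaf entry
    lbaf=${ns[lbaf$fmt]}                          # e.g. 'ms:64 lbads:12 rp:0 (in use)'
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}    # pull out the lbads field
    echo "in-use block size: $(( 1 << lbads )) bytes"   # lbads:12 -> 4096
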
00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:55.958 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 
11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:12:56.224 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.225 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:12:56.226 11:20:23 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 
11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.226 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:12:56.227 
11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
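The trace entries above show the nvme_get pattern that fills one bash associative array per namespace (ng2n3 at this point): functions.sh@21 sets IFS=: and reads each line of `nvme id-ns` output into a reg/val pair, functions.sh@22 skips registers with empty values, and functions.sh@23 evals the assignment so that, for example, ng2n3[nsze] ends up holding 0x100000. Below is a minimal standalone sketch of that parsing loop, assuming nvme-cli's "reg : value" output layout; it is an illustrative re-implementation, not the actual SPDK nvme/functions.sh.

#!/usr/bin/env bash
# Sketch of the nvme_get reg/val parsing seen in the trace above.
# Assumes nvme-cli output lines of the form "nsze    : 0x100000".
nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                  # global array named after the namespace
    while IFS=: read -r reg val; do      # split each line on the first ':'
        reg=${reg//[[:space:]]/}         # "nsze    " -> "nsze", "lbaf  4" -> "lbaf4"
        val=${val# }                     # drop the space after the ':'
        [[ -n $val ]] || continue        # header lines carry no value; skip them
        eval "${ref}[${reg}]=\"\$val\""  # e.g. ng2n3[nsze]="0x100000"
    done < <(nvme id-ns "$dev")
}
# Hypothetical usage (device node for illustration only):
# nvme_get_sketch ng2n3 /dev/ng2n3; echo "${ng2n3[nsze]}"

The eval indirection is what lets the array name (ng2n1, nvme2n2, ...) be chosen at call time, and it is why every register in this log appears twice: once as the eval command and once as the resulting assignment that xtrace prints after evaluation.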
00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:12:56.227 11:20:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:12:56.227 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:56.228 11:20:23 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:56.228 11:20:23 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:56.228 
11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:56.228 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:56.229 11:20:23 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:56.229 
11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:56.229 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
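Each time a namespace finishes parsing, functions.sh@58 records it in _ctrl_ns keyed by its numeric id, and functions.sh@54 advances the loop using an extglob pattern that matches both the ng2n* character-device nodes and the nvme2n* block-device nodes under the controller's sysfs directory. A standalone sketch of that indexing idiom follows, with illustrative paths; this is not the SPDK script itself.

#!/usr/bin/env bash
# Sketch of the _ctrl_ns indexing loop from functions.sh@54-58 above.
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2               # illustrative controller path
declare -A _ctrl_ns=()
# "${ctrl##*nvme}" -> "2" and "${ctrl##*/}n" -> "nvme2n", so the pattern
# expands to @(ng2|nvme2n)* and picks up ng2n1..ng2n3 and nvme2n1..nvme2n3.
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns_dev=${ns##*/}                     # basename, e.g. "ng2n2"
    _ctrl_ns[${ns_dev##*n}]=$ns_dev      # strip up to the last 'n' to get the id, "2"
done
declare -p _ctrl_ns

Since the glob sorts ng2n* ahead of nvme2n*, the nvme2nX entries overwrite the earlier ng2nX ones for the same index, which matches this trace: _ctrl_ns[1] is first set to ng2n1 and, a few entries later, to nvme2n1.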
00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:56.230 11:20:23 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.230 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:56.231 11:20:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:56.231 11:20:23 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:56.231 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:56.232 11:20:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.232 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:56.233 11:20:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:56.233 11:20:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:56.233 11:20:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:56.233 11:20:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:56.233 11:20:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
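[Editor's note] The same three-step pattern repeats above for every register of every controller and namespace: `read -r reg val` with `IFS=:` splits one line of `nvme id-ctrl`/`nvme id-ns` output, the `[[ -n ... ]]` test skips blank values, and `eval` stores the pair into a per-device associative array. A minimal stand-alone sketch of that pattern — the function name `parse_id_output` is illustrative, not the real `nvme/functions.sh` helper:

#!/usr/bin/env bash
# Load "reg : value" lines (nvme-cli id-ctrl/id-ns format) into an assoc array.
declare -A ctrl

parse_id_output() {
    local reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # keys are padded; strip all spaces
        val=${val#"${val%%[![:space:]]*}"}    # trim leading spaces only; trailing
                                              # padding is kept, as in the trace
        [[ -n $val ]] || continue             # same blank-value guard as above
        ctrl[$reg]=$val
    done
}

# Real use would be:  parse_id_output < <(nvme id-ctrl /dev/nvme3)
parse_id_output <<'EOF'
vid       : 0x1b36
mdts      : 7
EOF
printf '%s=%s\n' vid "${ctrl[vid]}" mdts "${ctrl[mdts]}"

Direct assignment (`ctrl[$reg]=$val`) suffices in this sketch; the trace goes through `eval` only because in functions.sh the array name itself is dynamic (`nvme2n2`, `nvme3`, ...).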
00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:56.233 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 
11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.234 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
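[Editor's note] Many of the values captured this way are bit masks rather than scalars. Taking the `oncs=0x15d` just recorded for nvme3: the set bits name the optional NVM commands the controller implements. A small illustrative decoder — the bit positions follow the ONCS layout in the NVMe base specification, and the decoder itself is not part of the test scripts:

#!/usr/bin/env bash
# Decode ONCS (Optional NVM Command Support) as captured for nvme3.
oncs=0x15d

names=("Compare" "Write Uncorrectable" "Dataset Management" "Write Zeroes"
       "Save/Select in Features" "Reservations" "Timestamp" "Verify" "Copy")

for bit in "${!names[@]}"; do
    (( oncs & (1 << bit) )) && printf 'bit %d: %s\n' "$bit" "${names[bit]}"
done

For 0x15d this prints bits 0, 2, 3, 4, 6, and 8 — Compare, Dataset Management, Write Zeroes, Save/Select in Features, Timestamp, and Copy.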
00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.235 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
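The xtrace above is nvme/functions.sh caching every identify-controller field (fna, vwc, awun, ..., ofcs) into a per-controller bash associative array: each output line is split on ':' into a register name and value, then stored via eval. A minimal standalone sketch of that caching pattern, assuming nvme-cli's plain-text `nvme id-ctrl` output and a hypothetical array name ctrl_regs rather than the script's generated nvme3 array:

#!/usr/bin/env bash
# Sketch: cache `nvme id-ctrl` fields into an associative array,
# mirroring the IFS=: / read -r reg val loop traced above.
declare -A ctrl_regs
while IFS=: read -r reg val; do
  reg=${reg//[[:space:]]/}                 # field names are padded with spaces
  val="${val#"${val%%[![:space:]]*}"}"     # trim leading whitespace from the value
  [[ -n $reg && -n $val ]] && ctrl_regs[$reg]=$val
done < <(nvme id-ctrl /dev/nvme3 2>/dev/null)
echo "vwc=${ctrl_regs[vwc]:-unset}"        # e.g. 0x7, matching the trace above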
00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:56.236 11:20:23 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:12:56.236 11:20:23 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:12:56.236 11:20:23 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:12:56.236 11:20:23 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:12:56.236 11:20:23 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:57.174 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:57.742 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:57.742 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:57.742 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:57.742 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:58.002 11:20:24 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:58.002 11:20:24 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:58.002 11:20:24 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.002 11:20:24 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:58.002 ************************************ 00:12:58.002 START TEST nvme_flexible_data_placement 00:12:58.002 ************************************ 00:12:58.002 11:20:24 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:58.261 Initializing NVMe Controllers 00:12:58.261 Attaching to 0000:00:13.0 00:12:58.261 Controller supports FDP Attached to 0000:00:13.0 00:12:58.261 Namespace ID: 1 Endurance Group ID: 1 00:12:58.261 Initialization complete. 
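The controller selection above hinges on CTRATT bit 19, the Flexible Data Placement capability bit: nvme0, nvme1 and nvme2 report ctratt=0x8000 (bit clear), while nvme3 reports 0x88010 (bit set), so get_ctrls_with_feature returns only nvme3 and the test binds to 0000:00:13.0. A standalone version of the same check, assuming nvme-cli with JSON output and jq on the box (the SPDK script reads its own cached register values instead):

# Sketch: test whether a controller advertises FDP (CTRATT bit 19).
ctratt=$(nvme id-ctrl /dev/nvme3 --output-format=json | jq -r .ctratt)
if (( ctratt & (1 << 19) )); then
  printf 'nvme3: FDP capable (ctratt=0x%x)\n' "$ctratt"
else
  echo 'nvme3: no FDP support'
fi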
00:12:58.261 00:12:58.261 ================================== 00:12:58.261 == FDP tests for Namespace: #01 == 00:12:58.261 ================================== 00:12:58.261 00:12:58.261 Get Feature: FDP: 00:12:58.261 ================= 00:12:58.261 Enabled: Yes 00:12:58.261 FDP configuration Index: 0 00:12:58.261 00:12:58.261 FDP configurations log page 00:12:58.261 =========================== 00:12:58.261 Number of FDP configurations: 1 00:12:58.261 Version: 0 00:12:58.261 Size: 112 00:12:58.261 FDP Configuration Descriptor: 0 00:12:58.261 Descriptor Size: 96 00:12:58.261 Reclaim Group Identifier format: 2 00:12:58.261 FDP Volatile Write Cache: Not Present 00:12:58.261 FDP Configuration: Valid 00:12:58.261 Vendor Specific Size: 0 00:12:58.261 Number of Reclaim Groups: 2 00:12:58.261 Number of Reclaim Unit Handles: 8 00:12:58.261 Max Placement Identifiers: 128 00:12:58.261 Number of Namespaces Supported: 256 00:12:58.261 Reclaim unit Nominal Size: 6000000 bytes 00:12:58.261 Estimated Reclaim Unit Time Limit: Not Reported 00:12:58.261 RUH Desc #000: RUH Type: Initially Isolated 00:12:58.261 RUH Desc #001: RUH Type: Initially Isolated 00:12:58.261 RUH Desc #002: RUH Type: Initially Isolated 00:12:58.261 RUH Desc #003: RUH Type: Initially Isolated 00:12:58.261 RUH Desc #004: RUH Type: Initially Isolated 00:12:58.261 RUH Desc #005: RUH Type: Initially Isolated 00:12:58.261 RUH Desc #006: RUH Type: Initially Isolated 00:12:58.261 RUH Desc #007: RUH Type: Initially Isolated 00:12:58.261 00:12:58.261 FDP reclaim unit handle usage log page 00:12:58.261 ====================================== 00:12:58.261 Number of Reclaim Unit Handles: 8 00:12:58.261 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:58.261 RUH Usage Desc #001: RUH Attributes: Unused 00:12:58.261 RUH Usage Desc #002: RUH Attributes: Unused 00:12:58.261 RUH Usage Desc #003: RUH Attributes: Unused 00:12:58.261 RUH Usage Desc #004: RUH Attributes: Unused 00:12:58.261 RUH Usage Desc #005: RUH Attributes: Unused 00:12:58.261 RUH Usage Desc #006: RUH Attributes: Unused 00:12:58.261 RUH Usage Desc #007: RUH Attributes: Unused 00:12:58.261 00:12:58.261 FDP statistics log page 00:12:58.261 ======================= 00:12:58.261 Host bytes with metadata written: 938274816 00:12:58.261 Media bytes with metadata written: 938364928 00:12:58.261 Media bytes erased: 0 00:12:58.261 00:12:58.261 FDP Reclaim unit handle status 00:12:58.261 ============================== 00:12:58.261 Number of RUHS descriptors: 2 00:12:58.261 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004131 00:12:58.261 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:12:58.261 00:12:58.261 FDP write on placement id: 0 success 00:12:58.261 00:12:58.261 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:12:58.261 00:12:58.261 IO mgmt send: RUH update for Placement ID: #0 Success 00:12:58.261 00:12:58.261 Get Feature: FDP Events for Placement handle: #0 00:12:58.261 ======================== 00:12:58.261 Number of FDP Events: 6 00:12:58.261 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:12:58.261 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:12:58.261 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:12:58.261 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:12:58.261 FDP Event: #4 Type: Media Reallocated Enabled: No 00:12:58.261 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:12:58.261 00:12:58.261 FDP events log page
00:12:58.261 =================== 00:12:58.261 Number of FDP events: 1 00:12:58.261 FDP Event #0: 00:12:58.261 Event Type: RU Not Written to Capacity 00:12:58.261 Placement Identifier: Valid 00:12:58.261 NSID: Valid 00:12:58.261 Location: Valid 00:12:58.261 Placement Identifier: 0 00:12:58.261 Event Timestamp: 7 00:12:58.261 Namespace Identifier: 1 00:12:58.261 Reclaim Group Identifier: 0 00:12:58.261 Reclaim Unit Handle Identifier: 0 00:12:58.261 00:12:58.261 FDP test passed 00:12:58.261 00:12:58.261 real 0m0.294s 00:12:58.261 user 0m0.095s 00:12:58.261 sys 0m0.098s 00:12:58.261 11:20:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.261 11:20:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:12:58.261 ************************************ 00:12:58.261 END TEST nvme_flexible_data_placement 00:12:58.262 ************************************ 00:12:58.262 00:12:58.262 real 0m9.024s 00:12:58.262 user 0m1.593s 00:12:58.262 sys 0m2.545s 00:12:58.262 11:20:25 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.262 11:20:25 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:58.262 ************************************ 00:12:58.262 END TEST nvme_fdp 00:12:58.262 ************************************ 00:12:58.262 11:20:25 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:12:58.262 11:20:25 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:58.262 11:20:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:58.262 11:20:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.262 11:20:25 -- common/autotest_common.sh@10 -- # set +x 00:12:58.262 ************************************ 00:12:58.262 START TEST nvme_rpc 00:12:58.262 ************************************ 00:12:58.262 11:20:25 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:58.520 * Looking for test storage... 
00:12:58.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:58.520 11:20:25 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:58.520 11:20:25 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:58.520 11:20:25 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:58.520 11:20:25 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.521 11:20:25 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:12:58.521 11:20:25 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.521 11:20:25 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:58.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.521 --rc genhtml_branch_coverage=1 00:12:58.521 --rc genhtml_function_coverage=1 00:12:58.521 --rc genhtml_legend=1 00:12:58.521 --rc geninfo_all_blocks=1 00:12:58.521 --rc geninfo_unexecuted_blocks=1 00:12:58.521 00:12:58.521 ' 00:12:58.521 11:20:25 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:58.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.521 --rc genhtml_branch_coverage=1 00:12:58.521 --rc genhtml_function_coverage=1 00:12:58.521 --rc genhtml_legend=1 00:12:58.521 --rc geninfo_all_blocks=1 00:12:58.521 --rc geninfo_unexecuted_blocks=1 00:12:58.521 00:12:58.521 ' 00:12:58.521 11:20:25 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:58.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.521 --rc genhtml_branch_coverage=1 00:12:58.521 --rc genhtml_function_coverage=1 00:12:58.521 --rc genhtml_legend=1 00:12:58.521 --rc geninfo_all_blocks=1 00:12:58.521 --rc geninfo_unexecuted_blocks=1 00:12:58.521 00:12:58.521 ' 00:12:58.521 11:20:25 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:58.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.521 --rc genhtml_branch_coverage=1 00:12:58.521 --rc genhtml_function_coverage=1 00:12:58.521 --rc genhtml_legend=1 00:12:58.521 --rc geninfo_all_blocks=1 00:12:58.521 --rc geninfo_unexecuted_blocks=1 00:12:58.521 00:12:58.521 ' 00:12:58.521 11:20:25 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:58.521 11:20:25 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:58.521 11:20:25 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:58.521 11:20:25 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:12:58.521 11:20:25 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:58.521 11:20:25 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:58.521 11:20:25 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:58.521 11:20:25 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:12:58.521 11:20:25 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:58.521 11:20:25 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:58.521 11:20:25 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:58.780 11:20:25 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:58.780 11:20:25 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:58.780 11:20:25 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:12:58.780 11:20:25 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:12:58.780 11:20:25 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67181 00:12:58.780 11:20:25 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:58.780 11:20:25 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:58.780 11:20:25 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67181 00:12:58.780 11:20:25 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67181 ']' 00:12:58.780 11:20:25 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.780 11:20:25 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.780 11:20:25 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.780 11:20:25 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.780 11:20:25 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.780 [2024-12-10 11:20:25.767714] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:12:58.780 [2024-12-10 11:20:25.767843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67181 ] 00:12:59.040 [2024-12-10 11:20:25.951541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:59.040 [2024-12-10 11:20:26.068361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.040 [2024-12-10 11:20:26.068396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.977 11:20:26 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.977 11:20:26 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:59.977 11:20:26 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:13:00.237 Nvme0n1 00:13:00.237 11:20:27 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:00.237 11:20:27 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:00.495 request: 00:13:00.495 { 00:13:00.495 "bdev_name": "Nvme0n1", 00:13:00.495 "filename": "non_existing_file", 00:13:00.495 "method": "bdev_nvme_apply_firmware", 00:13:00.495 "req_id": 1 00:13:00.495 } 00:13:00.495 Got JSON-RPC error response 00:13:00.495 response: 00:13:00.495 { 00:13:00.495 "code": -32603, 00:13:00.495 "message": "open file failed." 00:13:00.495 } 00:13:00.495 11:20:27 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:00.495 11:20:27 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:00.495 11:20:27 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:00.754 11:20:27 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:00.755 11:20:27 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67181 00:13:00.755 11:20:27 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67181 ']' 00:13:00.755 11:20:27 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67181 00:13:00.755 11:20:27 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:00.755 11:20:27 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.755 11:20:27 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67181 00:13:00.755 11:20:27 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.755 killing process with pid 67181 00:13:00.755 11:20:27 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.755 11:20:27 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67181' 00:13:00.755 11:20:27 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67181 00:13:00.755 11:20:27 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67181 00:13:03.289 00:13:03.289 real 0m4.658s 00:13:03.289 user 0m8.463s 00:13:03.289 sys 0m0.795s 00:13:03.289 11:20:29 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.289 ************************************ 00:13:03.289 END TEST nvme_rpc 00:13:03.289 ************************************ 00:13:03.289 11:20:29 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.289 11:20:30 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:03.289 11:20:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:13:03.289 11:20:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.289 11:20:30 -- common/autotest_common.sh@10 -- # set +x 00:13:03.289 ************************************ 00:13:03.289 START TEST nvme_rpc_timeouts 00:13:03.289 ************************************ 00:13:03.289 11:20:30 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:03.289 * Looking for test storage... 00:13:03.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:03.289 11:20:30 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:03.289 11:20:30 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:13:03.289 11:20:30 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:03.289 11:20:30 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.289 11:20:30 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:13:03.289 11:20:30 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.289 11:20:30 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:03.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.289 --rc genhtml_branch_coverage=1 00:13:03.289 --rc genhtml_function_coverage=1 00:13:03.289 --rc genhtml_legend=1 00:13:03.289 --rc geninfo_all_blocks=1 00:13:03.289 --rc geninfo_unexecuted_blocks=1 00:13:03.289 00:13:03.289 ' 00:13:03.289 11:20:30 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:03.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.289 --rc genhtml_branch_coverage=1 00:13:03.289 --rc genhtml_function_coverage=1 00:13:03.289 --rc genhtml_legend=1 00:13:03.289 --rc geninfo_all_blocks=1 00:13:03.289 --rc geninfo_unexecuted_blocks=1 00:13:03.289 00:13:03.289 ' 00:13:03.289 11:20:30 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:03.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.289 --rc genhtml_branch_coverage=1 00:13:03.289 --rc genhtml_function_coverage=1 00:13:03.289 --rc genhtml_legend=1 00:13:03.289 --rc geninfo_all_blocks=1 00:13:03.289 --rc geninfo_unexecuted_blocks=1 00:13:03.289 00:13:03.289 ' 00:13:03.289 11:20:30 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:03.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.289 --rc genhtml_branch_coverage=1 00:13:03.289 --rc genhtml_function_coverage=1 00:13:03.289 --rc genhtml_legend=1 00:13:03.289 --rc geninfo_all_blocks=1 00:13:03.289 --rc geninfo_unexecuted_blocks=1 00:13:03.289 00:13:03.289 ' 00:13:03.289 11:20:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:03.289 11:20:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67257 00:13:03.289 11:20:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67257 00:13:03.290 11:20:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67289 00:13:03.290 11:20:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:03.290 11:20:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:13:03.290 11:20:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67289 00:13:03.290 11:20:30 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67289 ']' 00:13:03.290 11:20:30 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.290 11:20:30 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.290 11:20:30 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.290 11:20:30 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.290 11:20:30 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:03.290 [2024-12-10 11:20:30.374471] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:13:03.290 [2024-12-10 11:20:30.375083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67289 ] 00:13:03.549 [2024-12-10 11:20:30.554832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:03.808 [2024-12-10 11:20:30.667866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.808 [2024-12-10 11:20:30.667899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.745 11:20:31 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.745 Checking default timeout settings: 00:13:04.745 11:20:31 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:13:04.745 11:20:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:04.745 11:20:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:05.004 Making settings changes with rpc: 00:13:05.004 11:20:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:05.004 11:20:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:05.004 Check default vs. modified settings: 00:13:05.004 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:13:05.004 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67257 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67257 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:05.572 Setting action_on_timeout is changed as expected. 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67257 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67257 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:05.572 Setting timeout_us is changed as expected. 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67257 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67257 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:05.572 Setting timeout_admin_us is changed as expected. 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67257 /tmp/settings_modified_67257 00:13:05.572 11:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67289 00:13:05.572 11:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67289 ']' 00:13:05.572 11:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67289 00:13:05.572 11:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:13:05.572 11:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.572 11:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67289 00:13:05.573 11:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:05.573 killing process with pid 67289 00:13:05.573 11:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:05.573 11:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67289' 00:13:05.573 11:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67289 00:13:05.573 11:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67289 00:13:08.140 RPC TIMEOUT SETTING TEST PASSED. 00:13:08.140 11:20:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
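The pass/fail logic traced above is a snapshot diff: `rpc.py save_config` is dumped once with defaults and once after `rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort`, and each setting is pulled out of both dumps with the same grep/awk/sed pipeline. A condensed sketch of that verification loop, reusing the snapshot paths from this run (the actual script compares against the exact expected modified values rather than mere inequality):

# Sketch: confirm each bdev_nvme timeout setting differs between snapshots.
for setting in action_on_timeout timeout_us timeout_admin_us; do
  before=$(grep "$setting" /tmp/settings_default_67257 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  after=$(grep "$setting" /tmp/settings_modified_67257 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  if [[ "$before" == "$after" ]]; then
    echo "ERROR: $setting did not change (still '$before')" >&2
    exit 1
  fi
  echo "Setting $setting is changed as expected ($before -> $after)."
done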
00:13:08.140 00:13:08.140 real 0m4.975s 00:13:08.140 user 0m9.387s 00:13:08.140 sys 0m0.783s 00:13:08.140 11:20:35 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.140 ************************************ 00:13:08.140 END TEST nvme_rpc_timeouts 00:13:08.140 11:20:35 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:08.140 ************************************ 00:13:08.140 11:20:35 -- spdk/autotest.sh@239 -- # uname -s 00:13:08.140 11:20:35 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:13:08.140 11:20:35 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:08.140 11:20:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:08.140 11:20:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.140 11:20:35 -- common/autotest_common.sh@10 -- # set +x 00:13:08.140 ************************************ 00:13:08.140 START TEST sw_hotplug 00:13:08.140 ************************************ 00:13:08.140 11:20:35 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:08.140 * Looking for test storage... 00:13:08.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:08.140 11:20:35 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:08.402 11:20:35 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:13:08.402 11:20:35 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:08.402 11:20:35 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:08.402 11:20:35 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:13:08.402 11:20:35 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:08.403 11:20:35 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:08.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.403 --rc genhtml_branch_coverage=1 00:13:08.403 --rc genhtml_function_coverage=1 00:13:08.403 --rc genhtml_legend=1 00:13:08.403 --rc geninfo_all_blocks=1 00:13:08.403 --rc geninfo_unexecuted_blocks=1 00:13:08.403 00:13:08.403 ' 00:13:08.403 11:20:35 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:08.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.403 --rc genhtml_branch_coverage=1 00:13:08.403 --rc genhtml_function_coverage=1 00:13:08.403 --rc genhtml_legend=1 00:13:08.403 --rc geninfo_all_blocks=1 00:13:08.403 --rc geninfo_unexecuted_blocks=1 00:13:08.403 00:13:08.403 ' 00:13:08.403 11:20:35 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:08.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.403 --rc genhtml_branch_coverage=1 00:13:08.403 --rc genhtml_function_coverage=1 00:13:08.403 --rc genhtml_legend=1 00:13:08.403 --rc geninfo_all_blocks=1 00:13:08.403 --rc geninfo_unexecuted_blocks=1 00:13:08.403 00:13:08.403 ' 00:13:08.403 11:20:35 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:08.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.403 --rc genhtml_branch_coverage=1 00:13:08.403 --rc genhtml_function_coverage=1 00:13:08.403 --rc genhtml_legend=1 00:13:08.403 --rc geninfo_all_blocks=1 00:13:08.403 --rc geninfo_unexecuted_blocks=1 00:13:08.403 00:13:08.403 ' 00:13:08.403 11:20:35 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:08.971 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:09.230 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:09.230 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:09.230 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:09.230 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:09.230 11:20:36 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:13:09.230 11:20:36 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:13:09.230 11:20:36 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:13:09.230 11:20:36 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@233 -- # local class 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:09.230 11:20:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:09.231 11:20:36 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:13:09.231 11:20:36 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:09.231 11:20:36 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:13:09.231 11:20:36 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:13:09.231 11:20:36 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:09.798 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:10.057 Waiting for block devices as requested 00:13:10.057 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:10.317 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:10.317 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:10.576 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.845 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:15.845 11:20:42 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:13:15.845 11:20:42 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:16.104 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:13:16.363 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:16.363 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:13:16.622 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:13:16.881 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:16.881 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:17.140 11:20:44 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:13:17.140 11:20:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:17.140 11:20:44 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:13:17.140 11:20:44 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:13:17.140 11:20:44 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68181 00:13:17.140 11:20:44 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:13:17.140 11:20:44 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:13:17.140 11:20:44 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:17.140 11:20:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:13:17.140 11:20:44 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:17.140 11:20:44 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:17.140 11:20:44 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:17.140 11:20:44 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:17.140 11:20:44 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:13:17.140 11:20:44 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:17.140 11:20:44 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:17.140 11:20:44 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:13:17.140 11:20:44 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:17.140 11:20:44 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:17.399 Initializing NVMe Controllers 00:13:17.399 Attaching to 0000:00:10.0 00:13:17.399 Attaching to 0000:00:11.0 00:13:17.399 Attached to 0000:00:10.0 00:13:17.399 Attached to 0000:00:11.0 00:13:17.399 Initialization complete. Starting I/O... 
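The hotplug example (`build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning`) sits on the attached controllers and waits for them to vanish and return; the shell side of the test simulates removal by writing into PCI sysfs, which is what the bare `echo 1` lines in the traces below correspond to. A minimal sketch of one surprise-remove/rescan cycle, assuming root and one of this run's BDFs (the SPDK helper re-binds uio_pci_generic explicitly instead of relying on a bus rescan):

# Sketch: software-hotplug one NVMe function via PCI sysfs.
bdf=0000:00:10.0
echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # surprise-remove the device
sleep 6                                       # matches hotplug_wait in the test
echo 1 > /sys/bus/pci/rescan                  # let the kernel rediscover it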
00:13:17.399 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:13:17.400 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:13:17.400 00:13:18.337 QEMU NVMe Ctrl (12340 ): 1568 I/Os completed (+1568) 00:13:18.337 QEMU NVMe Ctrl (12341 ): 1568 I/Os completed (+1568) 00:13:18.337 00:13:19.715 QEMU NVMe Ctrl (12340 ): 3704 I/Os completed (+2136) 00:13:19.715 QEMU NVMe Ctrl (12341 ): 3704 I/Os completed (+2136) 00:13:19.715 00:13:20.651 QEMU NVMe Ctrl (12340 ): 5924 I/Os completed (+2220) 00:13:20.651 QEMU NVMe Ctrl (12341 ): 5924 I/Os completed (+2220) 00:13:20.651 00:13:21.636 QEMU NVMe Ctrl (12340 ): 8128 I/Os completed (+2204) 00:13:21.636 QEMU NVMe Ctrl (12341 ): 8128 I/Os completed (+2204) 00:13:21.636 00:13:22.587 QEMU NVMe Ctrl (12340 ): 10296 I/Os completed (+2168) 00:13:22.587 QEMU NVMe Ctrl (12341 ): 10296 I/Os completed (+2168) 00:13:22.587 00:13:23.155 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:23.155 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:23.155 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:23.155 [2024-12-10 11:20:50.183209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:23.155 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:23.155 [2024-12-10 11:20:50.185070] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:23.155 [2024-12-10 11:20:50.185152] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:23.155 [2024-12-10 11:20:50.185174] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:23.155 [2024-12-10 11:20:50.185196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:23.155 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:23.155 [2024-12-10 11:20:50.188017] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:23.155 [2024-12-10 11:20:50.188076] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:23.155 [2024-12-10 11:20:50.188096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:23.155 [2024-12-10 11:20:50.188116] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:23.155 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:23.155 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:23.155 [2024-12-10 11:20:50.216872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
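[Annotation] The echo 1 traced at sw_hotplug.sh@40, once per device in the @39 loop, is the hot-remove itself; bash xtrace does not print redirections, so the sysfs target below is an assumption, but the controllers immediately dropping into the failed state is consistent with the standard PCI remove node:

  # Assumed expansion of the @39-40 remove loop (redirection target not visible in xtrace).
  for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"   # surprise-remove the function
  done

The unregister records for 0000:00:11.0 continue below; each "aborting outstanding command" line is one in-flight tracker being failed back to its caller.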
00:13:23.155 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:23.155 [2024-12-10 11:20:50.218521] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:23.155 [2024-12-10 11:20:50.218572] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:23.155 [2024-12-10 11:20:50.218601] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:23.155 [2024-12-10 11:20:50.218622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:23.155 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:23.155 [2024-12-10 11:20:50.221266] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:23.155 [2024-12-10 11:20:50.221314] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:23.155 [2024-12-10 11:20:50.221336] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:23.155 [2024-12-10 11:20:50.221356] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:23.155 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:23.155 EAL: Scan for (pci) bus failed. 00:13:23.155 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:23.155 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:23.412 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:23.412 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:23.412 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:23.412 00:13:23.412 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:23.412 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:23.412 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:23.412 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:23.412 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:23.412 Attaching to 0000:00:10.0 00:13:23.412 Attached to 0000:00:10.0 00:13:23.670 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:23.670 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:23.670 11:20:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:23.670 Attaching to 0000:00:11.0 00:13:23.670 Attached to 0000:00:11.0 00:13:24.603 QEMU NVMe Ctrl (12340 ): 1920 I/Os completed (+1920) 00:13:24.603 QEMU NVMe Ctrl (12341 ): 1708 I/Os completed (+1708) 00:13:24.603 00:13:25.539 QEMU NVMe Ctrl (12340 ): 4028 I/Os completed (+2108) 00:13:25.539 QEMU NVMe Ctrl (12341 ): 3818 I/Os completed (+2110) 00:13:25.539 00:13:26.474 QEMU NVMe Ctrl (12340 ): 6144 I/Os completed (+2116) 00:13:26.474 QEMU NVMe Ctrl (12341 ): 5934 I/Os completed (+2116) 00:13:26.474 00:13:27.409 QEMU NVMe Ctrl (12340 ): 8247 I/Os completed (+2103) 00:13:27.409 QEMU NVMe Ctrl (12341 ): 8041 I/Os completed (+2107) 00:13:27.409 00:13:28.344 QEMU NVMe Ctrl (12340 ): 10359 I/Os completed (+2112) 00:13:28.344 QEMU NVMe Ctrl (12341 ): 10154 I/Os completed (+2113) 00:13:28.344 00:13:29.280 QEMU NVMe Ctrl (12340 ): 12459 I/Os completed (+2100) 00:13:29.280 QEMU NVMe Ctrl (12341 ): 12254 I/Os completed (+2100) 00:13:29.280 00:13:30.702 QEMU NVMe Ctrl (12340 ): 14539 I/Os completed (+2080) 00:13:30.702 QEMU NVMe Ctrl (12341 ): 14337 I/Os completed (+2083) 
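[Annotation] Re-attach is the mirror image of the removal. The single echo 1 at @56 is consistent with a write to /sys/bus/pci/rescan, and the per-device echoes at @59-62 look like the usual driver_override dance; xtrace again hides the redirection targets, so the paths below are inferred, and the BDF being echoed twice (@60/@61) could be an unbind-then-bind pair or a retried bind:

  # Inferred targets for the @56 and @58-62 writes; treat every path as an assumption.
  echo 1 > /sys/bus/pci/rescan                                          # @56: re-enumerate
  for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59: pin the driver
    echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind             # @60/@61: bind
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear override
  done

The EAL "cannot open sysfs value .../0000:00:11.0/vendor" / "Scan for (pci) bus failed" lines are the app racing the rescan while 0000:00:11.0 is still gone; the subsequent "Attached to" messages show both controllers come back regardless.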
00:13:30.702 00:13:31.269 QEMU NVMe Ctrl (12340 ): 16691 I/Os completed (+2152) 00:13:31.269 QEMU NVMe Ctrl (12341 ): 16493 I/Os completed (+2156) 00:13:31.269 00:13:32.646 QEMU NVMe Ctrl (12340 ): 18915 I/Os completed (+2224) 00:13:32.646 QEMU NVMe Ctrl (12341 ): 18717 I/Os completed (+2224) 00:13:32.646 00:13:33.583 QEMU NVMe Ctrl (12340 ): 21131 I/Os completed (+2216) 00:13:33.583 QEMU NVMe Ctrl (12341 ): 20933 I/Os completed (+2216) 00:13:33.583 00:13:34.519 QEMU NVMe Ctrl (12340 ): 23275 I/Os completed (+2144) 00:13:34.519 QEMU NVMe Ctrl (12341 ): 23089 I/Os completed (+2156) 00:13:34.519 00:13:35.458 QEMU NVMe Ctrl (12340 ): 25515 I/Os completed (+2240) 00:13:35.458 QEMU NVMe Ctrl (12341 ): 25329 I/Os completed (+2240) 00:13:35.458 00:13:35.458 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:35.458 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:35.458 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:35.458 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:35.458 [2024-12-10 11:21:02.567099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:35.458 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:35.458 [2024-12-10 11:21:02.568769] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.458 [2024-12-10 11:21:02.568826] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.458 [2024-12-10 11:21:02.568850] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.458 [2024-12-10 11:21:02.568876] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.458 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:35.716 [2024-12-10 11:21:02.571643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.716 [2024-12-10 11:21:02.571697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.716 [2024-12-10 11:21:02.571716] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.716 [2024-12-10 11:21:02.571736] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.716 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:35.716 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:35.716 [2024-12-10 11:21:02.606484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:35.716 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:35.716 [2024-12-10 11:21:02.608068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.716 [2024-12-10 11:21:02.608116] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.716 [2024-12-10 11:21:02.608143] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.716 [2024-12-10 11:21:02.608162] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.716 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:35.716 [2024-12-10 11:21:02.610650] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.716 [2024-12-10 11:21:02.610696] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.716 [2024-12-10 11:21:02.610716] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.716 [2024-12-10 11:21:02.610736] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.716 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:35.716 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:35.716 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:35.716 EAL: Scan for (pci) bus failed. 00:13:35.716 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:35.716 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:35.716 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:35.716 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:35.975 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:35.975 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:35.975 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:35.975 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:35.975 Attaching to 0000:00:10.0 00:13:35.975 Attached to 0000:00:10.0 00:13:35.975 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:35.975 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:35.975 11:21:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:35.975 Attaching to 0000:00:11.0 00:13:35.975 Attached to 0000:00:11.0 00:13:36.543 QEMU NVMe Ctrl (12340 ): 1184 I/Os completed (+1184) 00:13:36.543 QEMU NVMe Ctrl (12341 ): 952 I/Os completed (+952) 00:13:36.543 00:13:37.487 QEMU NVMe Ctrl (12340 ): 3404 I/Os completed (+2220) 00:13:37.487 QEMU NVMe Ctrl (12341 ): 3172 I/Os completed (+2220) 00:13:37.487 00:13:38.421 QEMU NVMe Ctrl (12340 ): 5636 I/Os completed (+2232) 00:13:38.421 QEMU NVMe Ctrl (12341 ): 5404 I/Os completed (+2232) 00:13:38.421 00:13:39.355 QEMU NVMe Ctrl (12340 ): 7868 I/Os completed (+2232) 00:13:39.355 QEMU NVMe Ctrl (12341 ): 7636 I/Os completed (+2232) 00:13:39.355 00:13:40.288 QEMU NVMe Ctrl (12340 ): 10092 I/Os completed (+2224) 00:13:40.288 QEMU NVMe Ctrl (12341 ): 9860 I/Os completed (+2224) 00:13:40.288 00:13:41.664 QEMU NVMe Ctrl (12340 ): 12324 I/Os completed (+2232) 00:13:41.664 QEMU NVMe Ctrl (12341 ): 12092 I/Os completed (+2232) 00:13:41.664 00:13:42.595 QEMU NVMe Ctrl (12340 ): 14560 I/Os completed (+2236) 00:13:42.595 QEMU NVMe Ctrl (12341 ): 14328 I/Os completed (+2236) 00:13:42.595 
00:13:43.528 QEMU NVMe Ctrl (12340 ): 16796 I/Os completed (+2236) 00:13:43.528 QEMU NVMe Ctrl (12341 ): 16564 I/Os completed (+2236) 00:13:43.528 00:13:44.467 QEMU NVMe Ctrl (12340 ): 19020 I/Os completed (+2224) 00:13:44.467 QEMU NVMe Ctrl (12341 ): 18788 I/Os completed (+2224) 00:13:44.467 00:13:45.411 QEMU NVMe Ctrl (12340 ): 21264 I/Os completed (+2244) 00:13:45.411 QEMU NVMe Ctrl (12341 ): 21032 I/Os completed (+2244) 00:13:45.411 00:13:46.347 QEMU NVMe Ctrl (12340 ): 23476 I/Os completed (+2212) 00:13:46.347 QEMU NVMe Ctrl (12341 ): 23244 I/Os completed (+2212) 00:13:46.347 00:13:47.284 QEMU NVMe Ctrl (12340 ): 25688 I/Os completed (+2212) 00:13:47.284 QEMU NVMe Ctrl (12341 ): 25456 I/Os completed (+2212) 00:13:47.284 00:13:47.851 11:21:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:47.851 11:21:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:47.851 11:21:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:47.851 11:21:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:47.851 [2024-12-10 11:21:14.946970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:47.851 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:47.851 [2024-12-10 11:21:14.948785] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:47.851 [2024-12-10 11:21:14.948962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:47.851 [2024-12-10 11:21:14.949029] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:47.851 [2024-12-10 11:21:14.949126] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:47.851 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:47.851 [2024-12-10 11:21:14.952105] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:47.851 [2024-12-10 11:21:14.952239] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:47.851 [2024-12-10 11:21:14.952289] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:47.851 [2024-12-10 11:21:14.952392] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.111 11:21:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:48.111 11:21:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:48.111 [2024-12-10 11:21:14.985994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:48.111 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:48.111 [2024-12-10 11:21:14.990068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.111 [2024-12-10 11:21:14.990202] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.111 [2024-12-10 11:21:14.990258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.111 [2024-12-10 11:21:14.990299] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.111 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:48.111 [2024-12-10 11:21:14.993004] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.111 [2024-12-10 11:21:14.993079] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.111 [2024-12-10 11:21:14.993128] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.111 [2024-12-10 11:21:14.993171] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.111 11:21:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:48.111 11:21:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:48.111 11:21:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:48.111 11:21:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:48.111 11:21:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:48.111 11:21:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:48.370 11:21:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:48.370 11:21:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:48.370 11:21:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:48.370 11:21:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:48.370 Attaching to 0000:00:10.0 00:13:48.370 Attached to 0000:00:10.0 00:13:48.370 11:21:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:48.370 11:21:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:48.370 11:21:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:48.370 Attaching to 0000:00:11.0 00:13:48.370 Attached to 0000:00:11.0 00:13:48.370 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:48.370 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:48.370 [2024-12-10 11:21:15.330663] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:14:00.576 11:21:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:00.576 11:21:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:00.576 11:21:27 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.15 00:14:00.576 11:21:27 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.15 00:14:00.576 11:21:27 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:00.576 11:21:27 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.15 00:14:00.576 11:21:27 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.15 2 00:14:00.576 remove_attach_helper took 43.15s to complete (handling 2 nvme drive(s)) 11:21:27 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:14:07.142 11:21:33 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68181 00:14:07.142 
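[Annotation] The 43.15 that flows into helper_time above is produced by bash's time keyword: with TIMEFORMAT=%2R, time prints only the real (wall-clock) seconds to two decimals, which the @21 timing_cmd wrapper captures and hands to the @22 printf. A simplified stand-in (the real wrapper in common/autotest_common.sh uses exec, visible at @711 earlier, to juggle file descriptors so the timed command's output cannot pollute the capture):

  # Simplified model of timing_cmd; the 'time' report lands on stderr, so capture that.
  timing_cmd() {
    local time=0 TIMEFORMAT=%2R
    time=$({ time "$@" >/dev/null; } 2>&1)   # caveat: the command's own stderr would mix in
    echo "$time"                             # e.g. 43.15
  }
  helper_time=$(timing_cmd remove_attach_helper 3 6 false)

The kill -0 68181 probe at @93 then fails with "No such process" below, confirming the hotplug app already exited after its expected events.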
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68181) - No such process 00:14:07.142 11:21:33 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68181 00:14:07.142 11:21:33 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:14:07.142 11:21:33 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:14:07.142 11:21:33 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:14:07.142 11:21:33 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68731 00:14:07.142 11:21:33 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:07.142 11:21:33 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:14:07.142 11:21:33 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68731 00:14:07.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.142 11:21:33 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68731 ']' 00:14:07.142 11:21:33 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.142 11:21:33 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.142 11:21:33 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.142 11:21:33 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.142 11:21:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:07.142 [2024-12-10 11:21:33.445299] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:14:07.142 [2024-12-10 11:21:33.445435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68731 ] 00:14:07.142 [2024-12-10 11:21:33.625031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.142 [2024-12-10 11:21:33.734439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.712 11:21:34 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.712 11:21:34 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:14:07.712 11:21:34 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:07.712 11:21:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.712 11:21:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:07.712 11:21:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.712 11:21:34 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:14:07.712 11:21:34 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:07.712 11:21:34 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:07.712 11:21:34 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:07.712 11:21:34 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:07.712 11:21:34 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:07.712 11:21:34 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:07.712 11:21:34 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:14:07.712 11:21:34 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:07.712 11:21:34 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:07.712 11:21:34 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:07.712 11:21:34 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:07.712 11:21:34 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:14.282 11:21:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:14.282 11:21:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:14.282 11:21:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:14.282 11:21:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:14.282 11:21:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:14.282 11:21:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:14.282 11:21:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:14.282 11:21:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:14.282 11:21:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:14.282 11:21:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:14.282 11:21:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:14.282 11:21:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.282 11:21:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:14.282 [2024-12-10 11:21:40.688032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:14.282 [2024-12-10 11:21:40.690672] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.282 [2024-12-10 11:21:40.690854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.282 [2024-12-10 11:21:40.690885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.282 [2024-12-10 11:21:40.690947] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.282 [2024-12-10 11:21:40.690963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.282 [2024-12-10 11:21:40.690979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.282 [2024-12-10 11:21:40.690993] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.282 [2024-12-10 11:21:40.691007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.282 [2024-12-10 11:21:40.691019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.282 [2024-12-10 11:21:40.691039] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.282 [2024-12-10 11:21:40.691051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.282 [2024-12-10 11:21:40.691065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.282 11:21:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.282 11:21:40 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:14.282 11:21:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:14.282 [2024-12-10 11:21:41.087321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:14.282 [2024-12-10 11:21:41.089823] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.282 [2024-12-10 11:21:41.089869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.282 [2024-12-10 11:21:41.089890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.282 [2024-12-10 11:21:41.089930] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.282 [2024-12-10 11:21:41.089946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.282 [2024-12-10 11:21:41.089960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.282 [2024-12-10 11:21:41.089976] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.282 [2024-12-10 11:21:41.089987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.282 [2024-12-10 11:21:41.090001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.282 [2024-12-10 11:21:41.090014] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.282 [2024-12-10 11:21:41.090027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.282 [2024-12-10 11:21:41.090039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.282 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:14.282 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:14.282 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:14.282 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:14.282 11:21:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.282 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:14.282 11:21:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:14.282 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:14.282 11:21:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.282 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:14.282 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:14.282 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:14.282 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:14.282 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:14.542 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:14.542 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:14.542 
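[Annotation] From here the test runs in use_bdev=true mode: instead of reading sysfs, it asks the spdk_tgt it just started which controllers are still visible as bdevs. The @12-13 xtrace pins the helper down exactly, so this is a reconstruction rather than a guess:

  # bdev_bdfs as reconstructed from sw_hotplug.sh@12-13: PCI addresses of all NVMe bdevs.
  bdev_bdfs() {
    rpc_cmd bdev_get_bdevs \
      | jq -r '.[].driver_specific.nvme[].pci_address' \
      | sort -u
  }

(The /dev/fd/63 that jq reads from in the trace is just the process substitution bash creates to feed it the RPC output.)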
11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:14.542 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:14.542 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:14.542 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:14.542 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:14.542 11:21:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:26.752 11:21:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.752 11:21:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:26.752 11:21:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:26.752 11:21:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.752 11:21:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:26.752 [2024-12-10 11:21:53.766899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
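[Annotation] The backslash-heavy test at sw_hotplug.sh@71 near the top of this iteration is bash escaping the expected string character by character so == matches it literally rather than as a glob pattern. Unescaped, the check is simply that the sorted BDF list from bdev_bdfs is exactly both controllers again, i.e. the previous re-attach fully round-tripped through the target:

  # What the @71 assertion amounts to once the escaping is stripped.
  bdfs=($(bdev_bdfs))
  [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]   # both controllers back as bdevs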
00:14:26.752 [2024-12-10 11:21:53.769326] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.752 [2024-12-10 11:21:53.769467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.752 [2024-12-10 11:21:53.769619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.752 [2024-12-10 11:21:53.769754] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.752 [2024-12-10 11:21:53.769795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.752 [2024-12-10 11:21:53.769896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.752 [2024-12-10 11:21:53.770103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.752 [2024-12-10 11:21:53.770146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.752 [2024-12-10 11:21:53.770195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.752 [2024-12-10 11:21:53.770301] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.752 [2024-12-10 11:21:53.770339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.752 [2024-12-10 11:21:53.770391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.752 11:21:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:26.752 11:21:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:27.321 [2024-12-10 11:21:54.166271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
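[Annotation] The command/completion pairs printed for 0000:00:10.0 above (and for 0000:00:11.0 just below) decode with plain NVMe spec values, nothing SPDK-specific. The target keeps several Asynchronous Event Requests outstanding on each admin queue, so they are the first commands failed on a hot-remove:

  ASYNC EVENT REQUEST (0c) qid:0 cid:190   # admin opcode 0x0C on the admin queue (qid 0), command id 190
  ABORTED - BY REQUEST (00/07)             # status code type 0 (generic) / status 0x07: Command Abort Requested
  sqhd:0000 p:0 m:0 dnr:0                  # SQ head, phase tag, more bit; dnr=0 marks the abort retryable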
00:14:27.321 [2024-12-10 11:21:54.168770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:27.321 [2024-12-10 11:21:54.168813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.321 [2024-12-10 11:21:54.168836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.321 [2024-12-10 11:21:54.168858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:27.321 [2024-12-10 11:21:54.168872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.321 [2024-12-10 11:21:54.168885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.321 [2024-12-10 11:21:54.168901] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:27.321 [2024-12-10 11:21:54.168912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.321 [2024-12-10 11:21:54.168994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.321 [2024-12-10 11:21:54.169008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:27.321 [2024-12-10 11:21:54.169022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.321 [2024-12-10 11:21:54.169034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.321 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:27.321 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:27.321 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:27.321 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:27.321 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:27.321 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:27.321 11:21:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.321 11:21:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:27.321 11:21:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.321 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:27.321 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:27.580 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:27.580 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:27.580 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:27.580 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:27.580 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:27.580 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:27.580 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:27.580 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:27.580 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:27.580 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:27.580 11:21:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:39.788 11:22:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.788 11:22:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:39.788 11:22:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:39.788 [2024-12-10 11:22:06.746034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:39.788 [2024-12-10 11:22:06.748692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:39.788 [2024-12-10 11:22:06.748843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.788 [2024-12-10 11:22:06.749051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.788 [2024-12-10 11:22:06.749192] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:39.788 [2024-12-10 11:22:06.749232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.788 [2024-12-10 11:22:06.749343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.788 [2024-12-10 11:22:06.749405] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:39.788 [2024-12-10 11:22:06.749495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.788 [2024-12-10 11:22:06.749555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.788 [2024-12-10 11:22:06.749659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:39.788 [2024-12-10 11:22:06.749701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.788 [2024-12-10 11:22:06.749856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:39.788 11:22:06 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:39.788 11:22:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.788 11:22:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:39.788 11:22:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:39.788 11:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:40.048 [2024-12-10 11:22:07.145405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:40.048 [2024-12-10 11:22:07.147784] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:40.048 [2024-12-10 11:22:07.147935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.048 [2024-12-10 11:22:07.148089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.048 [2024-12-10 11:22:07.148202] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:40.048 [2024-12-10 11:22:07.148244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.048 [2024-12-10 11:22:07.148345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.048 [2024-12-10 11:22:07.148448] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:40.048 [2024-12-10 11:22:07.148486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.048 [2024-12-10 11:22:07.148585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.048 [2024-12-10 11:22:07.148640] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:40.048 [2024-12-10 11:22:07.148720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.048 [2024-12-10 11:22:07.148774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.307 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:40.307 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:40.307 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:40.307 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:40.307 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:40.307 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:14:40.307 11:22:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.307 11:22:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:40.307 11:22:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.307 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:40.307 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:40.566 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:40.566 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:40.566 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:40.566 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:40.566 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:40.566 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:40.566 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:40.566 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:40.825 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:40.825 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:40.825 11:22:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:53.026 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:53.026 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:53.026 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:53.026 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:53.027 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:53.027 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:53.027 11:22:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.027 11:22:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:53.027 11:22:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.027 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:53.027 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:53.027 11:22:19 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.16 00:14:53.027 11:22:19 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.16 00:14:53.027 11:22:19 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:53.027 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.16 00:14:53.027 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.16 2 00:14:53.027 remove_attach_helper took 45.16s to complete (handling 2 nvme drive(s)) 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:14:53.027 11:22:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.027 11:22:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:53.027 11:22:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.027 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:53.027 11:22:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.027 11:22:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:53.027 11:22:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.027 11:22:19 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:14:53.027 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:53.027 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:53.027 11:22:19 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:53.027 11:22:19 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:53.027 11:22:19 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:53.027 11:22:19 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:53.027 11:22:19 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:14:53.027 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:53.027 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:53.027 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:53.027 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:53.027 11:22:19 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:59.595 11:22:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:59.595 11:22:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:59.595 11:22:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:59.595 11:22:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:59.595 11:22:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:59.595 11:22:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:59.595 11:22:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:59.595 11:22:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:59.595 11:22:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:59.595 11:22:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:59.596 11:22:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:59.596 11:22:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.596 11:22:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:59.596 [2024-12-10 11:22:25.885331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
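[Annotation] Before this final pass the test exercised the toggle itself: bdev_nvme_set_hotplug -d at sw_hotplug.sh@119 switched the target's hotplug monitor off and @120 re-enabled it. In the SPDK tree rpc_cmd is a thin test wrapper around scripts/rpc.py, so outside the harness the equivalent calls would plausibly be the following (flag spellings taken from the trace, not verified against rpc.py's argument parser):

  # Assumed direct equivalents of the rpc_cmd calls at @119-120.
  ./scripts/rpc.py bdev_nvme_set_hotplug -d   # stop monitoring for NVMe arrivals/removals
  ./scripts/rpc.py bdev_nvme_set_hotplug -e   # resume monitoring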
00:14:59.596 [2024-12-10 11:22:25.887111] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.596 [2024-12-10 11:22:25.887156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.596 [2024-12-10 11:22:25.887176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.596 [2024-12-10 11:22:25.887203] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.596 [2024-12-10 11:22:25.887214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.596 [2024-12-10 11:22:25.887229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.596 [2024-12-10 11:22:25.887242] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.596 [2024-12-10 11:22:25.887259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.596 [2024-12-10 11:22:25.887270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.596 [2024-12-10 11:22:25.887286] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.596 [2024-12-10 11:22:25.887297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.596 [2024-12-10 11:22:25.887314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.596 11:22:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.596 11:22:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:59.596 11:22:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:59.596 11:22:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:59.596 11:22:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:59.596 11:22:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:59.596 11:22:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:59.596 11:22:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:59.596 11:22:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:59.596 11:22:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.596 11:22:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:59.596 11:22:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.596 11:22:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:59.596 11:22:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:59.596 [2024-12-10 11:22:26.484347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
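[Annotation] Removal is then confirmed by polling the target rather than sysfs: the @50-51 records above, with the counter at (( 1 > 0 )) while only 0000:00:11.0 remains and printf reporting it, trace out a textbook wait loop over bdev_bdfs:

  # Poll loop reconstructed from sw_hotplug.sh@50-51: wait until no NVMe bdev remains.
  bdfs=($(bdev_bdfs))
  while (( ${#bdfs[@]} > 0 )); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
  done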
00:14:59.596 [2024-12-10 11:22:26.486131] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.596 [2024-12-10 11:22:26.486175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.596 [2024-12-10 11:22:26.486194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.596 [2024-12-10 11:22:26.486216] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.596 [2024-12-10 11:22:26.486230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.596 [2024-12-10 11:22:26.486243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.596 [2024-12-10 11:22:26.486259] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.596 [2024-12-10 11:22:26.486270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.596 [2024-12-10 11:22:26.486285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.596 [2024-12-10 11:22:26.486299] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.596 [2024-12-10 11:22:26.486312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.596 [2024-12-10 11:22:26.486323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.164 11:22:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:00.164 11:22:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:00.164 11:22:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:00.164 11:22:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:00.164 11:22:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:00.164 11:22:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:00.164 11:22:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.164 11:22:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:00.164 11:22:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.164 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:00.164 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:00.164 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:00.164 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:00.164 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:00.164 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:00.423 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:00.423 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:00.423 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:00.423 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:15:00.423 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:00.423 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:00.423 11:22:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:12.718 11:22:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.718 11:22:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:12.718 11:22:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:12.718 [2024-12-10 11:22:39.463488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:12.718 [2024-12-10 11:22:39.466149] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.718 [2024-12-10 11:22:39.466198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.718 [2024-12-10 11:22:39.466215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.718 [2024-12-10 11:22:39.466242] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.718 [2024-12-10 11:22:39.466254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.718 [2024-12-10 11:22:39.466268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.718 [2024-12-10 11:22:39.466281] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.718 [2024-12-10 11:22:39.466295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.718 [2024-12-10 11:22:39.466306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.718 [2024-12-10 11:22:39.466322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.718 [2024-12-10 11:22:39.466333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.718 [2024-12-10 11:22:39.466348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:12.718 11:22:39 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:12.718 11:22:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.718 11:22:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:12.718 11:22:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:12.718 11:22:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:12.977 [2024-12-10 11:22:39.862833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:15:12.977 [2024-12-10 11:22:39.864427] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.977 [2024-12-10 11:22:39.864466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.977 [2024-12-10 11:22:39.864486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.977 [2024-12-10 11:22:39.864505] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.978 [2024-12-10 11:22:39.864522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.978 [2024-12-10 11:22:39.864534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.978 [2024-12-10 11:22:39.864553] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.978 [2024-12-10 11:22:39.864564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.978 [2024-12-10 11:22:39.864578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.978 [2024-12-10 11:22:39.864591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.978 [2024-12-10 11:22:39.864605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.978 [2024-12-10 11:22:39.864616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.978 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:12.978 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:12.978 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:12.978 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:12.978 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:12.978 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
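Annotation: the per-device `echo 1` at sw_hotplug.sh@40, followed immediately by the controller dropping into a failed state and its outstanding admin commands being aborted, is consistent with a sysfs surprise removal. A sketch, with the target path an assumption:

# @39-@40: detach each controller out from under the driver. The "remove"
# path is an assumption; the trace shows only "echo 1" per device.
for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"
done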
00:15:12.978 11:22:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.978 11:22:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:12.978 11:22:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.237 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:13.237 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:13.237 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:13.237 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:13.237 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:13.237 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:13.496 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:13.496 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:13.496 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:13.496 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:13.496 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:13.496 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:13.496 11:22:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:25.706 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:25.706 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:25.706 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:25.707 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:25.707 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:25.707 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:25.707 11:22:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.707 11:22:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:25.707 11:22:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.707 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:25.707 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:25.707 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:25.707 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:25.707 [2024-12-10 11:22:52.542452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
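Annotation: the backslash-riddled right-hand side printed at sw_hotplug.sh@71 is not a glob gone wrong. Under `set -x`, bash escapes every character of a quoted [[ == ]] pattern so the trace is unambiguous; the escaping exists only in the xtrace rendering, not in the script. The traced comparison is equivalent to the plain test below:

# What the @71 trace line actually executes: a literal string comparison
# checking that both controllers came back after re-attach.
bdfs=(0000:00:10.0 0000:00:11.0)   # as returned by bdev_bdfs in the trace
[[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]] && echo 'both controllers are back'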
00:15:25.707 [2024-12-10 11:22:52.545327] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:25.707 [2024-12-10 11:22:52.545372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.707 [2024-12-10 11:22:52.545388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.707 [2024-12-10 11:22:52.545414] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:25.707 [2024-12-10 11:22:52.545426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.707 [2024-12-10 11:22:52.545441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.707 [2024-12-10 11:22:52.545455] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:25.707 [2024-12-10 11:22:52.545472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.707 [2024-12-10 11:22:52.545484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.707 [2024-12-10 11:22:52.545499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:25.707 [2024-12-10 11:22:52.545510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.707 [2024-12-10 11:22:52.545524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.707 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:25.707 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:25.707 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:25.707 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:25.707 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:25.707 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:25.707 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:25.707 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:25.707 11:22:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.707 11:22:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:25.707 11:22:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.707 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:25.707 11:22:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:25.966 [2024-12-10 11:22:52.941833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
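Annotation: across iterations the trace repeats the same skeleton: decrement hotplug_events (@38), detach every controller (@39-@40), wait for the bdevs to vanish (@43-@51), re-attach (@56-@62), settle (@66), and verify (@70-@71). A compact sketch of that outer loop; detach_dev, wait_for_bdevs_gone, and reattach_all are hypothetical names for the steps sketched in the earlier annotations.

# Outer-loop skeleton inferred from the repeated trace pattern (@38-@71).
# The helper names are hypothetical stand-ins for the traced steps.
while (( hotplug_events-- )); do                             # @38
    for dev in "${nvmes[@]}"; do detach_dev "$dev"; done     # @39-@40
    wait_for_bdevs_gone                                      # @43-@51 poll
    reattach_all                                             # @56-@62
    sleep 12                                                 # @66 settle time
    bdfs=($(bdev_bdfs))                                      # @70
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]                        # @71: all back?
done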
00:15:25.966 [2024-12-10 11:22:52.943709] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:25.966 [2024-12-10 11:22:52.943752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.966 [2024-12-10 11:22:52.943773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.966 [2024-12-10 11:22:52.943796] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:25.966 [2024-12-10 11:22:52.943811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.966 [2024-12-10 11:22:52.943823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.966 [2024-12-10 11:22:52.943840] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:25.966 [2024-12-10 11:22:52.943852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.966 [2024-12-10 11:22:52.943866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.966 [2024-12-10 11:22:52.943879] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:25.966 [2024-12-10 11:22:52.943896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.966 [2024-12-10 11:22:52.943907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.225 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:26.225 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:26.225 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:26.225 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:26.225 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:26.225 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:26.225 11:22:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.225 11:22:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:26.225 11:22:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.225 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:26.225 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:26.225 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:26.225 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:26.225 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:26.485 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:26.485 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:26.485 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:26.485 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:26.485 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
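Annotation: for readers unfamiliar with the RPC output being filtered, the one-liner below feeds a representative (entirely hypothetical) bdev_get_bdevs payload through the exact jq filter from the trace:

# Hypothetical single-bdev payload; the jq filter is verbatim from the trace.
echo '[{"name": "nvme0n1",
        "driver_specific": {"nvme": [{"pci_address": "0000:00:10.0"}]}}]' \
    | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
# prints: 0000:00:10.0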
00:15:26.485 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:26.485 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:26.485 11:22:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:38.762 11:23:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:38.762 11:23:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:38.762 11:23:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:38.762 11:23:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:38.762 11:23:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:38.762 11:23:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:38.762 11:23:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.762 11:23:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:38.762 11:23:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.762 11:23:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:38.762 11:23:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:38.762 11:23:05 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.78 00:15:38.762 11:23:05 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.78 00:15:38.762 11:23:05 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:38.762 11:23:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.78 00:15:38.762 11:23:05 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.78 2 00:15:38.762 remove_attach_helper took 45.78s to complete (handling 2 nvme drive(s)) 11:23:05 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:15:38.762 11:23:05 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68731 00:15:38.762 11:23:05 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68731 ']' 00:15:38.762 11:23:05 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68731 00:15:38.762 11:23:05 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:15:38.762 11:23:05 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.763 11:23:05 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68731 00:15:38.763 11:23:05 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.763 11:23:05 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.763 killing process with pid 68731 00:15:38.763 11:23:05 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68731' 00:15:38.763 11:23:05 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68731 00:15:38.763 11:23:05 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68731 00:15:41.298 11:23:07 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:41.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:41.866 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:41.866 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:42.125 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:42.125 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:42.125 00:15:42.125 real 2m34.114s 00:15:42.125 user 1m51.776s 00:15:42.125 sys 0m22.683s 00:15:42.125 11:23:09 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.125 11:23:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:42.125 ************************************ 00:15:42.125 END TEST sw_hotplug 00:15:42.125 ************************************ 00:15:42.384 11:23:09 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:15:42.384 11:23:09 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:42.384 11:23:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:42.384 11:23:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.384 11:23:09 -- common/autotest_common.sh@10 -- # set +x 00:15:42.384 ************************************ 00:15:42.384 START TEST nvme_xnvme 00:15:42.384 ************************************ 00:15:42.384 11:23:09 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:42.384 * Looking for test storage... 00:15:42.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:42.384 11:23:09 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:42.384 11:23:09 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:42.384 11:23:09 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:42.384 11:23:09 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:42.384 11:23:09 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.384 11:23:09 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.384 11:23:09 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.384 11:23:09 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.384 11:23:09 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.384 11:23:09 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.384 11:23:09 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.384 11:23:09 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.384 11:23:09 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.384 11:23:09 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.384 11:23:09 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.384 11:23:09 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:42.384 11:23:09 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:15:42.384 11:23:09 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.384 11:23:09 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:42.646 11:23:09 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:42.646 11:23:09 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:42.646 11:23:09 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.646 11:23:09 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:42.646 11:23:09 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.646 11:23:09 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:42.646 11:23:09 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:42.646 11:23:09 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.646 11:23:09 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:42.646 11:23:09 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.646 11:23:09 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.646 11:23:09 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.646 11:23:09 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:15:42.646 11:23:09 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.646 11:23:09 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:42.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.646 --rc genhtml_branch_coverage=1 00:15:42.646 --rc genhtml_function_coverage=1 00:15:42.646 --rc genhtml_legend=1 00:15:42.646 --rc geninfo_all_blocks=1 00:15:42.646 --rc geninfo_unexecuted_blocks=1 00:15:42.646 00:15:42.646 ' 00:15:42.646 11:23:09 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:42.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.646 --rc genhtml_branch_coverage=1 00:15:42.646 --rc genhtml_function_coverage=1 00:15:42.646 --rc genhtml_legend=1 00:15:42.646 --rc geninfo_all_blocks=1 00:15:42.646 --rc geninfo_unexecuted_blocks=1 00:15:42.646 00:15:42.646 ' 00:15:42.646 11:23:09 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:42.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.646 --rc genhtml_branch_coverage=1 00:15:42.646 --rc genhtml_function_coverage=1 00:15:42.646 --rc genhtml_legend=1 00:15:42.646 --rc geninfo_all_blocks=1 00:15:42.646 --rc geninfo_unexecuted_blocks=1 00:15:42.646 00:15:42.646 ' 00:15:42.646 11:23:09 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:42.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.646 --rc genhtml_branch_coverage=1 00:15:42.646 --rc genhtml_function_coverage=1 00:15:42.646 --rc genhtml_legend=1 00:15:42.646 --rc geninfo_all_blocks=1 00:15:42.646 --rc geninfo_unexecuted_blocks=1 00:15:42.646 00:15:42.646 ' 00:15:42.646 11:23:09 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:15:42.646 11:23:09 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:15:42.646 11:23:09 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:42.646 11:23:09 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:15:42.646 11:23:09 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:42.646 11:23:09 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:42.646 11:23:09 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:42.646 11:23:09 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:15:42.646 11:23:09 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:15:42.646 11:23:09 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:42.646 11:23:09 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:42.647 11:23:09 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:15:42.647 11:23:09 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:15:42.647 11:23:09 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:42.647 11:23:09 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:42.647 11:23:09 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:15:42.647 11:23:09 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:15:42.647 11:23:09 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:15:42.647 11:23:09 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:15:42.647 11:23:09 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:15:42.647 11:23:09 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:15:42.647 11:23:09 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:42.647 11:23:09 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:42.647 11:23:09 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:42.647 11:23:09 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:42.647 11:23:09 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:42.647 11:23:09 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:42.647 11:23:09 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:15:42.647 11:23:09 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:42.647 #define SPDK_CONFIG_H 00:15:42.647 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:42.647 #define SPDK_CONFIG_APPS 1 00:15:42.647 #define SPDK_CONFIG_ARCH native 00:15:42.647 #define SPDK_CONFIG_ASAN 1 00:15:42.647 #undef SPDK_CONFIG_AVAHI 00:15:42.647 #undef SPDK_CONFIG_CET 00:15:42.647 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:42.647 #define SPDK_CONFIG_COVERAGE 1 00:15:42.647 #define SPDK_CONFIG_CROSS_PREFIX 00:15:42.647 #undef SPDK_CONFIG_CRYPTO 00:15:42.647 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:42.647 #undef SPDK_CONFIG_CUSTOMOCF 00:15:42.647 #undef SPDK_CONFIG_DAOS 00:15:42.647 #define SPDK_CONFIG_DAOS_DIR 00:15:42.647 #define SPDK_CONFIG_DEBUG 1 00:15:42.647 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:42.647 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:15:42.647 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:42.647 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:42.647 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:42.647 #undef SPDK_CONFIG_DPDK_UADK 00:15:42.647 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:42.647 #define SPDK_CONFIG_EXAMPLES 1 00:15:42.647 #undef SPDK_CONFIG_FC 00:15:42.647 #define SPDK_CONFIG_FC_PATH 00:15:42.647 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:42.647 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:42.647 #define SPDK_CONFIG_FSDEV 1 00:15:42.647 #undef SPDK_CONFIG_FUSE 00:15:42.647 #undef SPDK_CONFIG_FUZZER 00:15:42.647 #define SPDK_CONFIG_FUZZER_LIB 00:15:42.647 #undef SPDK_CONFIG_GOLANG 00:15:42.647 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:42.647 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:42.647 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:42.647 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:42.647 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:42.647 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:42.647 #undef SPDK_CONFIG_HAVE_LZ4 00:15:42.647 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:42.647 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:42.647 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:42.647 #define SPDK_CONFIG_IDXD 1 00:15:42.647 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:42.647 #undef SPDK_CONFIG_IPSEC_MB 00:15:42.647 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:42.647 #define SPDK_CONFIG_ISAL 1 00:15:42.647 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:42.647 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:42.647 #define SPDK_CONFIG_LIBDIR 00:15:42.647 #undef SPDK_CONFIG_LTO 00:15:42.647 #define SPDK_CONFIG_MAX_LCORES 128 00:15:42.647 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:15:42.647 #define SPDK_CONFIG_NVME_CUSE 1 00:15:42.647 #undef SPDK_CONFIG_OCF 00:15:42.647 #define SPDK_CONFIG_OCF_PATH 00:15:42.647 #define SPDK_CONFIG_OPENSSL_PATH 00:15:42.647 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:42.648 #define SPDK_CONFIG_PGO_DIR 00:15:42.648 #undef SPDK_CONFIG_PGO_USE 00:15:42.648 #define SPDK_CONFIG_PREFIX /usr/local 00:15:42.648 #undef SPDK_CONFIG_RAID5F 00:15:42.648 #undef SPDK_CONFIG_RBD 00:15:42.648 #define SPDK_CONFIG_RDMA 1 00:15:42.648 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:42.648 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:42.648 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:42.648 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:42.648 #define SPDK_CONFIG_SHARED 1 00:15:42.648 #undef SPDK_CONFIG_SMA 00:15:42.648 #define SPDK_CONFIG_TESTS 1 00:15:42.648 #undef SPDK_CONFIG_TSAN 00:15:42.648 #define SPDK_CONFIG_UBLK 1 00:15:42.648 #define SPDK_CONFIG_UBSAN 1 00:15:42.648 #undef SPDK_CONFIG_UNIT_TESTS 00:15:42.648 #undef SPDK_CONFIG_URING 00:15:42.648 #define SPDK_CONFIG_URING_PATH 00:15:42.648 #undef SPDK_CONFIG_URING_ZNS 00:15:42.648 #undef SPDK_CONFIG_USDT 00:15:42.648 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:42.648 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:42.648 #undef SPDK_CONFIG_VFIO_USER 00:15:42.648 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:42.648 #define SPDK_CONFIG_VHOST 1 00:15:42.648 #define SPDK_CONFIG_VIRTIO 1 00:15:42.648 #undef SPDK_CONFIG_VTUNE 00:15:42.648 #define SPDK_CONFIG_VTUNE_DIR 00:15:42.648 #define SPDK_CONFIG_WERROR 1 00:15:42.648 #define SPDK_CONFIG_WPDK_DIR 00:15:42.648 #define SPDK_CONFIG_XNVME 1 00:15:42.648 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:42.648 11:23:09 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:42.648 11:23:09 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:42.648 11:23:09 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:15:42.648 11:23:09 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.648 11:23:09 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.648 11:23:09 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.648 11:23:09 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.648 11:23:09 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.648 11:23:09 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.648 11:23:09 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:42.648 11:23:09 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@68 -- # uname -s 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:42.648 
11:23:09 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:15:42.648 11:23:09 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:15:42.648 11:23:09 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:42.649 11:23:09 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:42.649 11:23:09 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
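The stretch of trace above is autotest_common.sh assembling the sanitizer environment for the run: ASAN and UBSAN options are exported, and a leak-suppression file for LSAN is regenerated on the fly. A condensed standalone sketch of that setup follows; the option strings, the suppression-file path, and the libfuse3 suppression are taken from the trace, while the freestanding script framing is illustrative.

# Sanitizer environment as assembled by autotest_common.sh (condensed).
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

# A known leak in libfuse3 is suppressed via a file rebuilt on every run.
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" > "$asan_suppression_file"
export LSAN_OPTIONS=suppressions=$asan_suppression_file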
00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70086 ]] 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70086 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:15:42.649 11:23:09 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ghc2hz 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.ghc2hz/tests/xnvme /tmp/spdk.ghc2hz 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:15:42.650 11:23:09 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974323200 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593825280 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974323200 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593825280 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:42.650 11:23:09 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=95498674176 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4204105728 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:15:42.650 * Looking for test storage... 
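The df -T dump above feeds set_test_storage, which the next stretch of trace steps through: record each mount's capacity, resolve the mount backing the test directory, and check its free space against the requested size. A condensed sketch of that logic, with the field order, awk filter, and the 2 GiB-plus-overhead request taken from the trace (the freestanding framing is illustrative):

# Condensed set_test_storage flow; df reports 1K blocks, hence the *1024.
requested_size=2214592512    # 2 GiB plus overhead, as requested above
declare -A avails
while read -r source fs size use avail _ mount; do
  avails["$mount"]=$((avail * 1024))
done < <(df -T | grep -v Filesystem)

# Find the mount backing the test dir and compare it against the request.
mount=$(df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme | awk '$1 !~ /Filesystem/{print $6}')
target_space=${avails[$mount]}
if (( target_space >= requested_size )); then
  printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
fi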
00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13974323200 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:42.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:42.650 11:23:09 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:42.910 11:23:09 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:15:42.911 11:23:09 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.911 11:23:09 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:42.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.911 --rc genhtml_branch_coverage=1 00:15:42.911 --rc genhtml_function_coverage=1 00:15:42.911 --rc genhtml_legend=1 00:15:42.911 --rc geninfo_all_blocks=1 00:15:42.911 --rc geninfo_unexecuted_blocks=1 00:15:42.911 00:15:42.911 ' 00:15:42.911 11:23:09 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:42.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.911 --rc genhtml_branch_coverage=1 00:15:42.911 --rc genhtml_function_coverage=1 00:15:42.911 --rc genhtml_legend=1 00:15:42.911 --rc geninfo_all_blocks=1 
00:15:42.911 --rc geninfo_unexecuted_blocks=1 00:15:42.911 00:15:42.911 ' 00:15:42.911 11:23:09 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:42.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.911 --rc genhtml_branch_coverage=1 00:15:42.911 --rc genhtml_function_coverage=1 00:15:42.911 --rc genhtml_legend=1 00:15:42.911 --rc geninfo_all_blocks=1 00:15:42.911 --rc geninfo_unexecuted_blocks=1 00:15:42.911 00:15:42.911 ' 00:15:42.911 11:23:09 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:42.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.911 --rc genhtml_branch_coverage=1 00:15:42.911 --rc genhtml_function_coverage=1 00:15:42.911 --rc genhtml_legend=1 00:15:42.911 --rc geninfo_all_blocks=1 00:15:42.911 --rc geninfo_unexecuted_blocks=1 00:15:42.911 00:15:42.911 ' 00:15:42.911 11:23:09 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.911 11:23:09 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.911 11:23:09 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.911 11:23:09 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.911 11:23:09 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.911 11:23:09 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:42.911 11:23:09 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.911 11:23:09 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:15:42.911 11:23:09 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:43.479 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:43.738 Waiting for block devices as requested 00:15:43.738 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:43.738 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:44.006 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:44.006 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:49.274 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:49.275 11:23:16 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:15:49.533 11:23:16 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:15:49.533 11:23:16 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:15:49.793 11:23:16 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:15:49.793 11:23:16 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:15:49.793 11:23:16 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:15:49.793 11:23:16 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:15:49.793 11:23:16 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:15:49.793 No valid GPT data, bailing 00:15:49.793 11:23:16 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:49.793 11:23:16 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:15:49.793 11:23:16 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:15:49.793 11:23:16 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:15:49.793 11:23:16 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:15:49.793 11:23:16 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:15:49.793 11:23:16 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:15:49.793 11:23:16 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:15:49.793 11:23:16 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:49.793 11:23:16 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:49.793 11:23:16 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:15:49.793 11:23:16 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:15:49.793 11:23:16 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:49.793 11:23:16 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:49.793 11:23:16 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:49.793 11:23:16 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:49.793 11:23:16 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:49.793 11:23:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:49.793 11:23:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.793 11:23:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:49.793 ************************************ 00:15:49.793 START TEST xnvme_rpc 00:15:49.793 ************************************ 00:15:49.793 11:23:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:49.793 11:23:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:49.793 11:23:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:49.793 11:23:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:49.793 11:23:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:49.793 11:23:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70486 00:15:49.793 11:23:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70486 00:15:49.793 11:23:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70486 ']' 00:15:49.793 11:23:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:49.793 11:23:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.793 11:23:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.793 11:23:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.793 11:23:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.793 11:23:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.053 [2024-12-10 11:23:17.005249] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
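prep_nvme, traced above, reloads the kernel nvme driver with polling queues and then scans namespaces for one that is free: a device passes when spdk-gpt.py finds no GPT data and blkid reports no partition-table type, which is why /dev/nvme0n1 is claimed here for libaio and io_uring (and /dev/ng0n1 for io_uring_cmd). A simplified sketch of that scan, keeping only the blkid half of the check (the full harness additionally consults spdk-gpt.py):

# Reload nvme with polling queues, then take the first unused namespace.
shopt -s extglob
modprobe -r nvme
modprobe nvme poll_queues=10
for nvme in /dev/nvme*n!(*p*); do
  if [[ -z $(blkid -s PTTYPE -o value "$nvme") ]]; then
    echo "using $nvme"    # /dev/nvme0n1 in the run above
    break
  fi
done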
00:15:50.053 [2024-12-10 11:23:17.005374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70486 ] 00:15:50.311 [2024-12-10 11:23:17.187881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.311 [2024-12-10 11:23:17.297275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.249 xnvme_bdev 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:51.249 11:23:18 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.249 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.508 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:51.508 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:51.508 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.508 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.508 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.508 11:23:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70486 00:15:51.508 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70486 ']' 00:15:51.508 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70486 00:15:51.508 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:51.508 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.508 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70486 00:15:51.508 killing process with pid 70486 00:15:51.508 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.508 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.508 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70486' 00:15:51.508 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70486 00:15:51.508 11:23:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70486 00:15:54.044 ************************************ 00:15:54.044 END TEST xnvme_rpc 00:15:54.044 ************************************ 00:15:54.044 00:15:54.044 real 0m3.920s 00:15:54.044 user 0m3.946s 00:15:54.044 sys 0m0.552s 00:15:54.044 11:23:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.044 11:23:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.044 11:23:20 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:54.044 11:23:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:54.044 11:23:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.044 11:23:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:54.044 ************************************ 00:15:54.044 START TEST xnvme_bdevperf 00:15:54.044 ************************************ 00:15:54.044 11:23:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:54.044 11:23:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:54.044 11:23:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:15:54.044 11:23:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:54.044 11:23:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:54.044 11:23:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:15:54.044 11:23:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:54.044 11:23:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:54.044 { 00:15:54.044 "subsystems": [ 00:15:54.044 { 00:15:54.044 "subsystem": "bdev", 00:15:54.044 "config": [ 00:15:54.044 { 00:15:54.044 "params": { 00:15:54.044 "io_mechanism": "libaio", 00:15:54.044 "conserve_cpu": false, 00:15:54.044 "filename": "/dev/nvme0n1", 00:15:54.044 "name": "xnvme_bdev" 00:15:54.044 }, 00:15:54.044 "method": "bdev_xnvme_create" 00:15:54.044 }, 00:15:54.044 { 00:15:54.044 "method": "bdev_wait_for_examine" 00:15:54.044 } 00:15:54.044 ] 00:15:54.044 } 00:15:54.044 ] 00:15:54.044 } 00:15:54.044 [2024-12-10 11:23:20.974183] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:15:54.044 [2024-12-10 11:23:20.974309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70567 ] 00:15:54.044 [2024-12-10 11:23:21.143965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.302 [2024-12-10 11:23:21.256448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.561 Running I/O for 5 seconds... 00:15:56.875 47615.00 IOPS, 186.00 MiB/s [2024-12-10T11:23:24.927Z] 47570.50 IOPS, 185.82 MiB/s [2024-12-10T11:23:25.864Z] 47685.67 IOPS, 186.27 MiB/s [2024-12-10T11:23:26.802Z] 47112.25 IOPS, 184.03 MiB/s 00:15:59.688 Latency(us) 00:15:59.688 [2024-12-10T11:23:26.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.688 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:59.688 xnvme_bdev : 5.00 45539.36 177.89 0.00 0.00 1402.14 149.69 4132.19 00:15:59.688 [2024-12-10T11:23:26.802Z] =================================================================================================================== 00:15:59.688 [2024-12-10T11:23:26.802Z] Total : 45539.36 177.89 0.00 0.00 1402.14 149.69 4132.19 00:16:00.627 11:23:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:00.627 11:23:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:00.627 11:23:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:00.627 11:23:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:00.627 11:23:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:00.887 { 00:16:00.887 "subsystems": [ 00:16:00.887 { 00:16:00.887 "subsystem": "bdev", 00:16:00.887 "config": [ 00:16:00.887 { 00:16:00.887 "params": { 00:16:00.887 "io_mechanism": "libaio", 00:16:00.887 "conserve_cpu": false, 00:16:00.887 "filename": "/dev/nvme0n1", 00:16:00.887 "name": "xnvme_bdev" 00:16:00.887 }, 00:16:00.887 "method": "bdev_xnvme_create" 00:16:00.887 }, 00:16:00.887 { 00:16:00.887 "method": "bdev_wait_for_examine" 00:16:00.887 } 00:16:00.887 ] 00:16:00.887 } 00:16:00.887 ] 00:16:00.887 } 00:16:00.887 [2024-12-10 11:23:27.829443] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
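Each bdevperf pass above receives its bdev configuration as JSON streamed over /dev/fd/62 by gen_conf. The equivalent invocation with the JSON written to a regular file looks like the sketch below; the bdevperf flags and the bdev_xnvme_create parameters are copied from the trace, while the temp-file path is illustrative.

# Equivalent of the gen_conf-over-/dev/fd/62 pairing above, using a file.
cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "name": "xnvme_bdev",
            "filename": "/dev/nvme0n1",
            "io_mechanism": "libaio",
            "conserve_cpu": false
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096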
00:16:00.887 [2024-12-10 11:23:27.829551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70644 ] 00:16:01.147 [2024-12-10 11:23:28.009363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.147 [2024-12-10 11:23:28.123685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.406 Running I/O for 5 seconds... 00:16:03.722 47187.00 IOPS, 184.32 MiB/s [2024-12-10T11:23:31.773Z] 46016.50 IOPS, 179.75 MiB/s [2024-12-10T11:23:32.716Z] 45642.33 IOPS, 178.29 MiB/s [2024-12-10T11:23:33.653Z] 45487.50 IOPS, 177.69 MiB/s 00:16:06.539 Latency(us) 00:16:06.539 [2024-12-10T11:23:33.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.540 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:06.540 xnvme_bdev : 5.00 45411.46 177.39 0.00 0.00 1405.79 145.58 2947.80 00:16:06.540 [2024-12-10T11:23:33.654Z] =================================================================================================================== 00:16:06.540 [2024-12-10T11:23:33.654Z] Total : 45411.46 177.39 0.00 0.00 1405.79 145.58 2947.80 00:16:07.476 00:16:07.476 real 0m13.692s 00:16:07.476 user 0m4.953s 00:16:07.476 sys 0m5.852s 00:16:07.476 11:23:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.476 11:23:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:07.476 ************************************ 00:16:07.476 END TEST xnvme_bdevperf 00:16:07.476 ************************************ 00:16:07.735 11:23:34 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:07.735 11:23:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:07.735 11:23:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.735 11:23:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:07.735 ************************************ 00:16:07.735 START TEST xnvme_fio_plugin 00:16:07.735 ************************************ 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:07.735 11:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:07.735 { 00:16:07.735 "subsystems": [ 00:16:07.735 { 00:16:07.735 "subsystem": "bdev", 00:16:07.735 "config": [ 00:16:07.735 { 00:16:07.735 "params": { 00:16:07.735 "io_mechanism": "libaio", 00:16:07.735 "conserve_cpu": false, 00:16:07.735 "filename": "/dev/nvme0n1", 00:16:07.735 "name": "xnvme_bdev" 00:16:07.735 }, 00:16:07.735 "method": "bdev_xnvme_create" 00:16:07.735 }, 00:16:07.735 { 00:16:07.735 "method": "bdev_wait_for_examine" 00:16:07.735 } 00:16:07.735 ] 00:16:07.735 } 00:16:07.735 ] 00:16:07.735 } 00:16:07.994 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:07.994 fio-3.35 00:16:07.994 Starting 1 thread 00:16:14.599 00:16:14.599 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70769: Tue Dec 10 11:23:40 2024 00:16:14.599 read: IOPS=52.7k, BW=206MiB/s (216MB/s)(1029MiB/5001msec) 00:16:14.599 slat (usec): min=4, max=805, avg=16.46, stdev=24.41 00:16:14.599 clat (usec): min=32, max=5628, avg=731.77, stdev=446.12 00:16:14.599 lat (usec): min=89, max=5676, avg=748.23, stdev=448.87 00:16:14.599 clat percentiles (usec): 00:16:14.599 | 1.00th=[ 161], 5.00th=[ 243], 10.00th=[ 306], 20.00th=[ 408], 00:16:14.599 | 30.00th=[ 498], 40.00th=[ 578], 50.00th=[ 668], 60.00th=[ 750], 00:16:14.599 | 70.00th=[ 848], 80.00th=[ 963], 90.00th=[ 1139], 95.00th=[ 1369], 00:16:14.599 | 99.00th=[ 2671], 99.50th=[ 3228], 99.90th=[ 4113], 99.95th=[ 4424], 00:16:14.599 | 99.99th=[ 4883] 00:16:14.599 bw ( KiB/s): min=192384, max=242448, per=100.00%, avg=213229.78, stdev=15379.98, samples=9 
00:16:14.599 iops : min=48096, max=60612, avg=53307.44, stdev=3845.00, samples=9 00:16:14.599 lat (usec) : 50=0.01%, 100=0.07%, 250=5.46%, 500=24.97%, 750=29.47% 00:16:14.599 lat (usec) : 1000=22.78% 00:16:14.599 lat (msec) : 2=15.20%, 4=1.92%, 10=0.14% 00:16:14.599 cpu : usr=27.92%, sys=53.48%, ctx=161, majf=0, minf=764 00:16:14.599 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=9.8%, 16=25.4%, 32=58.4%, >=64=1.9% 00:16:14.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.599 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:16:14.599 issued rwts: total=263335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.599 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:14.599 00:16:14.599 Run status group 0 (all jobs): 00:16:14.599 READ: bw=206MiB/s (216MB/s), 206MiB/s-206MiB/s (216MB/s-216MB/s), io=1029MiB (1079MB), run=5001-5001msec 00:16:15.167 ----------------------------------------------------- 00:16:15.167 Suppressions used: 00:16:15.167 count bytes template 00:16:15.167 1 11 /usr/src/fio/parse.c 00:16:15.167 1 8 libtcmalloc_minimal.so 00:16:15.167 1 904 libcrypto.so 00:16:15.167 ----------------------------------------------------- 00:16:15.167 00:16:15.167 11:23:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:15.168 { 00:16:15.168 "subsystems": [ 00:16:15.168 { 00:16:15.168 "subsystem": "bdev", 00:16:15.168 "config": [ 00:16:15.168 { 00:16:15.168 "params": { 
00:16:15.168 "io_mechanism": "libaio", 00:16:15.168 "conserve_cpu": false, 00:16:15.168 "filename": "/dev/nvme0n1", 00:16:15.168 "name": "xnvme_bdev" 00:16:15.168 }, 00:16:15.168 "method": "bdev_xnvme_create" 00:16:15.168 }, 00:16:15.168 { 00:16:15.168 "method": "bdev_wait_for_examine" 00:16:15.168 } 00:16:15.168 ] 00:16:15.168 } 00:16:15.168 ] 00:16:15.168 } 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:15.168 11:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:15.168 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:15.168 fio-3.35 00:16:15.168 Starting 1 thread 00:16:21.739 00:16:21.739 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70866: Tue Dec 10 11:23:48 2024 00:16:21.739 write: IOPS=50.6k, BW=197MiB/s (207MB/s)(988MiB/5001msec); 0 zone resets 00:16:21.739 slat (usec): min=4, max=1196, avg=17.30, stdev=25.99 00:16:21.739 clat (usec): min=83, max=5361, avg=746.37, stdev=404.18 00:16:21.739 lat (usec): min=134, max=5437, avg=763.67, stdev=405.05 00:16:21.739 clat percentiles (usec): 00:16:21.739 | 1.00th=[ 165], 5.00th=[ 239], 10.00th=[ 293], 20.00th=[ 412], 00:16:21.739 | 30.00th=[ 506], 40.00th=[ 603], 50.00th=[ 701], 60.00th=[ 799], 00:16:21.740 | 70.00th=[ 906], 80.00th=[ 1029], 90.00th=[ 1205], 95.00th=[ 1369], 00:16:21.740 | 99.00th=[ 2040], 99.50th=[ 2540], 99.90th=[ 3818], 99.95th=[ 4146], 00:16:21.740 | 99.99th=[ 4686] 00:16:21.740 bw ( KiB/s): min=179176, max=233592, per=100.00%, avg=204936.89, stdev=17134.12, samples=9 00:16:21.740 iops : min=44794, max=58398, avg=51234.22, stdev=4283.53, samples=9 00:16:21.740 lat (usec) : 100=0.11%, 250=5.86%, 500=23.21%, 750=26.00%, 1000=22.93% 00:16:21.740 lat (msec) : 2=20.79%, 4=1.02%, 10=0.07% 00:16:21.740 cpu : usr=26.20%, sys=58.04%, ctx=34, majf=0, minf=765 00:16:21.740 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=11.2%, 16=26.1%, 32=56.1%, >=64=1.8% 00:16:21.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.740 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:16:21.740 issued rwts: total=0,252801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.740 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:21.740 00:16:21.740 Run status group 0 (all jobs): 00:16:21.740 WRITE: bw=197MiB/s (207MB/s), 197MiB/s-197MiB/s (207MB/s-207MB/s), io=988MiB (1035MB), run=5001-5001msec 00:16:22.308 ----------------------------------------------------- 00:16:22.308 Suppressions used: 00:16:22.308 count bytes template 00:16:22.308 1 11 /usr/src/fio/parse.c 00:16:22.308 1 8 libtcmalloc_minimal.so 00:16:22.308 1 904 libcrypto.so 00:16:22.308 ----------------------------------------------------- 00:16:22.308 00:16:22.308 00:16:22.308 real 0m14.727s 00:16:22.308 user 0m6.347s 00:16:22.308 sys 0m6.339s 00:16:22.308 11:23:49 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:22.308 ************************************ 00:16:22.308 END TEST xnvme_fio_plugin 00:16:22.308 ************************************ 00:16:22.308 11:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:22.577 11:23:49 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:22.577 11:23:49 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:22.578 11:23:49 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:16:22.578 11:23:49 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:22.578 11:23:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:22.578 11:23:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.578 11:23:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:22.578 ************************************ 00:16:22.578 START TEST xnvme_rpc 00:16:22.578 ************************************ 00:16:22.578 11:23:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:22.578 11:23:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:22.578 11:23:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:22.578 11:23:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:22.578 11:23:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:22.578 11:23:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70952 00:16:22.578 11:23:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.578 11:23:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70952 00:16:22.578 11:23:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70952 ']' 00:16:22.578 11:23:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.578 11:23:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.578 11:23:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.578 11:23:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.578 11:23:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.578 [2024-12-10 11:23:49.563516] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
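This second xnvme_rpc pass repeats the first, but with conserve_cpu enabled (the -c flag to bdev_xnvme_create), and each bdev property is verified by reading the target's config back over RPC. A condensed sketch of that round-trip, assuming rpc_cmd resolves to scripts/rpc.py as in SPDK's test harness; the positional arguments and jq filter mirror the trace:

# Create the bdev with conserve_cpu on, read the setting back, clean up.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
$rpc framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
$rpc bdev_xnvme_delete xnvme_bdev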
00:16:22.578 [2024-12-10 11:23:49.564217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70952 ] 00:16:22.850 [2024-12-10 11:23:49.746847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.850 [2024-12-10 11:23:49.856036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.788 xnvme_bdev 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:23.788 11:23:50 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70952 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70952 ']' 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70952 00:16:23.788 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:24.047 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.047 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70952 00:16:24.047 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.047 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.047 killing process with pid 70952 00:16:24.047 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70952' 00:16:24.047 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70952 00:16:24.047 11:23:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70952 00:16:26.584 00:16:26.584 real 0m3.824s 00:16:26.584 user 0m3.891s 00:16:26.584 sys 0m0.521s 00:16:26.584 11:23:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.584 11:23:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.584 ************************************ 00:16:26.584 END TEST xnvme_rpc 00:16:26.584 ************************************ 00:16:26.584 11:23:53 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:26.584 11:23:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:26.584 11:23:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.584 11:23:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:26.584 ************************************ 00:16:26.584 START TEST xnvme_bdevperf 00:16:26.584 ************************************ 00:16:26.584 11:23:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:26.584 11:23:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:26.584 11:23:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:16:26.584 11:23:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:26.584 11:23:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:26.584 11:23:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:16:26.584 11:23:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:26.584 11:23:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:26.584 { 00:16:26.584 "subsystems": [ 00:16:26.584 { 00:16:26.584 "subsystem": "bdev", 00:16:26.584 "config": [ 00:16:26.584 { 00:16:26.584 "params": { 00:16:26.584 "io_mechanism": "libaio", 00:16:26.584 "conserve_cpu": true, 00:16:26.584 "filename": "/dev/nvme0n1", 00:16:26.584 "name": "xnvme_bdev" 00:16:26.584 }, 00:16:26.584 "method": "bdev_xnvme_create" 00:16:26.584 }, 00:16:26.584 { 00:16:26.584 "method": "bdev_wait_for_examine" 00:16:26.584 } 00:16:26.584 ] 00:16:26.584 } 00:16:26.584 ] 00:16:26.584 } 00:16:26.584 [2024-12-10 11:23:53.441428] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:16:26.584 [2024-12-10 11:23:53.441540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71032 ] 00:16:26.584 [2024-12-10 11:23:53.623364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.843 [2024-12-10 11:23:53.740294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.102 Running I/O for 5 seconds... 00:16:28.976 44230.00 IOPS, 172.77 MiB/s [2024-12-10T11:23:57.488Z] 44025.50 IOPS, 171.97 MiB/s [2024-12-10T11:23:58.441Z] 43909.67 IOPS, 171.52 MiB/s [2024-12-10T11:23:59.377Z] 43952.00 IOPS, 171.69 MiB/s 00:16:32.263 Latency(us) 00:16:32.263 [2024-12-10T11:23:59.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.263 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:32.263 xnvme_bdev : 5.00 44001.30 171.88 0.00 0.00 1451.21 142.29 6027.21 00:16:32.263 [2024-12-10T11:23:59.377Z] =================================================================================================================== 00:16:32.263 [2024-12-10T11:23:59.377Z] Total : 44001.30 171.88 0.00 0.00 1451.21 142.29 6027.21 00:16:33.199 11:24:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:33.199 11:24:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:33.199 11:24:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:33.199 11:24:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:33.199 11:24:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:33.199 { 00:16:33.199 "subsystems": [ 00:16:33.199 { 00:16:33.199 "subsystem": "bdev", 00:16:33.199 "config": [ 00:16:33.199 { 00:16:33.199 "params": { 00:16:33.199 "io_mechanism": "libaio", 00:16:33.199 "conserve_cpu": true, 00:16:33.199 "filename": "/dev/nvme0n1", 00:16:33.199 "name": "xnvme_bdev" 00:16:33.199 }, 00:16:33.199 "method": "bdev_xnvme_create" 00:16:33.199 }, 00:16:33.199 { 00:16:33.199 "method": "bdev_wait_for_examine" 00:16:33.199 } 00:16:33.199 ] 00:16:33.199 } 00:16:33.199 ] 00:16:33.199 } 00:16:33.199 [2024-12-10 11:24:00.307734] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
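The JSON that gen_conf streams to bdevperf over /dev/fd/62 is an ordinary bdev subsystem config, so the randwrite run that follows can be reproduced from a plain file. A sketch, with the config copied verbatim from the block printed above; only the /tmp file name is an invented placeholder:

cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": true,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/xnvme_bdev.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096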
00:16:33.199 [2024-12-10 11:24:00.307863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71114 ] 00:16:33.458 [2024-12-10 11:24:00.486604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.717 [2024-12-10 11:24:00.599694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.976 Running I/O for 5 seconds... 00:16:36.289 45160.00 IOPS, 176.41 MiB/s [2024-12-10T11:24:03.970Z] 44845.00 IOPS, 175.18 MiB/s [2024-12-10T11:24:05.346Z] 44912.67 IOPS, 175.44 MiB/s [2024-12-10T11:24:06.289Z] 44775.00 IOPS, 174.90 MiB/s 00:16:39.175 Latency(us) 00:16:39.175 [2024-12-10T11:24:06.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.175 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:39.175 xnvme_bdev : 5.00 45634.24 178.26 0.00 0.00 1398.99 136.53 4579.62 00:16:39.175 [2024-12-10T11:24:06.289Z] =================================================================================================================== 00:16:39.175 [2024-12-10T11:24:06.289Z] Total : 45634.24 178.26 0.00 0.00 1398.99 136.53 4579.62 00:16:40.111 00:16:40.111 real 0m13.739s 00:16:40.111 user 0m4.953s 00:16:40.111 sys 0m5.828s 00:16:40.111 11:24:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.111 11:24:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:40.111 ************************************ 00:16:40.111 END TEST xnvme_bdevperf 00:16:40.111 ************************************ 00:16:40.111 11:24:07 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:40.111 11:24:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:40.111 11:24:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.111 11:24:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:40.111 ************************************ 00:16:40.111 START TEST xnvme_fio_plugin 00:16:40.111 ************************************ 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:40.111 11:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:40.111 { 00:16:40.111 "subsystems": [ 00:16:40.111 { 00:16:40.111 "subsystem": "bdev", 00:16:40.111 "config": [ 00:16:40.111 { 00:16:40.111 "params": { 00:16:40.111 "io_mechanism": "libaio", 00:16:40.111 "conserve_cpu": true, 00:16:40.111 "filename": "/dev/nvme0n1", 00:16:40.111 "name": "xnvme_bdev" 00:16:40.111 }, 00:16:40.111 "method": "bdev_xnvme_create" 00:16:40.111 }, 00:16:40.111 { 00:16:40.111 "method": "bdev_wait_for_examine" 00:16:40.111 } 00:16:40.111 ] 00:16:40.111 } 00:16:40.111 ] 00:16:40.111 } 00:16:40.378 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:40.378 fio-3.35 00:16:40.378 Starting 1 thread 00:16:46.947 00:16:46.947 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71234: Tue Dec 10 11:24:13 2024 00:16:46.947 read: IOPS=54.9k, BW=214MiB/s (225MB/s)(1072MiB/5001msec) 00:16:46.947 slat (usec): min=4, max=2695, avg=15.68, stdev=22.85 00:16:46.947 clat (usec): min=84, max=5717, avg=718.49, stdev=417.78 00:16:46.947 lat (usec): min=129, max=5733, avg=734.17, stdev=419.72 00:16:46.947 clat percentiles (usec): 00:16:46.947 | 1.00th=[ 163], 5.00th=[ 245], 10.00th=[ 314], 20.00th=[ 424], 00:16:46.947 | 30.00th=[ 502], 40.00th=[ 578], 50.00th=[ 652], 60.00th=[ 725], 00:16:46.947 | 70.00th=[ 816], 80.00th=[ 938], 90.00th=[ 1123], 95.00th=[ 1336], 00:16:46.947 | 99.00th=[ 2409], 99.50th=[ 3064], 99.90th=[ 3949], 99.95th=[ 4228], 00:16:46.947 | 99.99th=[ 4752] 00:16:46.947 bw ( KiB/s): min=200712, max=248320, per=100.00%, avg=221023.11, stdev=18206.45, samples=9 
00:16:46.947 iops : min=50178, max=62080, avg=55255.78, stdev=4551.61, samples=9 00:16:46.947 lat (usec) : 100=0.05%, 250=5.28%, 500=24.30%, 750=32.93%, 1000=21.31% 00:16:46.947 lat (msec) : 2=14.41%, 4=1.63%, 10=0.09% 00:16:46.947 cpu : usr=26.76%, sys=54.82%, ctx=96, majf=0, minf=764 00:16:46.947 IO depths : 1=0.1%, 2=1.0%, 4=3.4%, 8=9.5%, 16=24.7%, 32=59.3%, >=64=2.0% 00:16:46.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.947 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:16:46.947 issued rwts: total=274489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.947 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:46.947 00:16:46.947 Run status group 0 (all jobs): 00:16:46.947 READ: bw=214MiB/s (225MB/s), 214MiB/s-214MiB/s (225MB/s-225MB/s), io=1072MiB (1124MB), run=5001-5001msec 00:16:47.515 ----------------------------------------------------- 00:16:47.515 Suppressions used: 00:16:47.515 count bytes template 00:16:47.515 1 11 /usr/src/fio/parse.c 00:16:47.515 1 8 libtcmalloc_minimal.so 00:16:47.515 1 904 libcrypto.so 00:16:47.515 ----------------------------------------------------- 00:16:47.515 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:47.515 11:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:47.515 { 00:16:47.515 "subsystems": [ 00:16:47.515 { 00:16:47.515 "subsystem": "bdev", 00:16:47.515 "config": [ 00:16:47.515 { 00:16:47.515 "params": { 00:16:47.515 "io_mechanism": "libaio", 00:16:47.515 "conserve_cpu": true, 00:16:47.515 "filename": "/dev/nvme0n1", 00:16:47.515 "name": "xnvme_bdev" 00:16:47.515 }, 00:16:47.515 "method": "bdev_xnvme_create" 00:16:47.515 }, 00:16:47.515 { 00:16:47.515 "method": "bdev_wait_for_examine" 00:16:47.515 } 00:16:47.515 ] 00:16:47.515 } 00:16:47.515 ] 00:16:47.515 } 00:16:47.774 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:47.774 fio-3.35 00:16:47.774 Starting 1 thread 00:16:54.343 00:16:54.343 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71336: Tue Dec 10 11:24:20 2024 00:16:54.343 write: IOPS=51.0k, BW=199MiB/s (209MB/s)(996MiB/5001msec); 0 zone resets 00:16:54.343 slat (usec): min=4, max=1023, avg=16.56, stdev=26.37 00:16:54.343 clat (usec): min=58, max=6801, avg=783.81, stdev=479.13 00:16:54.343 lat (usec): min=128, max=7088, avg=800.37, stdev=482.23 00:16:54.343 clat percentiles (usec): 00:16:54.343 | 1.00th=[ 186], 5.00th=[ 273], 10.00th=[ 347], 20.00th=[ 461], 00:16:54.343 | 30.00th=[ 553], 40.00th=[ 635], 50.00th=[ 709], 60.00th=[ 791], 00:16:54.343 | 70.00th=[ 881], 80.00th=[ 996], 90.00th=[ 1188], 95.00th=[ 1467], 00:16:54.343 | 99.00th=[ 2933], 99.50th=[ 3556], 99.90th=[ 4555], 99.95th=[ 4948], 00:16:54.343 | 99.99th=[ 6652] 00:16:54.343 bw ( KiB/s): min=183056, max=227184, per=100.00%, avg=204083.22, stdev=16855.73, samples=9 00:16:54.343 iops : min=45764, max=56796, avg=51020.78, stdev=4213.96, samples=9 00:16:54.343 lat (usec) : 100=0.03%, 250=3.62%, 500=20.30%, 750=31.14%, 1000=25.02% 00:16:54.343 lat (msec) : 2=17.49%, 4=2.11%, 10=0.28% 00:16:54.343 cpu : usr=30.08%, sys=51.90%, ctx=48, majf=0, minf=765 00:16:54.343 IO depths : 1=0.1%, 2=0.9%, 4=3.3%, 8=9.6%, 16=24.7%, 32=59.5%, >=64=2.0% 00:16:54.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.343 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:16:54.343 issued rwts: total=0,254879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.343 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:54.343 00:16:54.343 Run status group 0 (all jobs): 00:16:54.343 WRITE: bw=199MiB/s (209MB/s), 199MiB/s-199MiB/s (209MB/s-209MB/s), io=996MiB (1044MB), run=5001-5001msec 00:16:54.913 ----------------------------------------------------- 00:16:54.913 Suppressions used: 00:16:54.913 count bytes template 00:16:54.913 1 11 /usr/src/fio/parse.c 00:16:54.913 1 8 libtcmalloc_minimal.so 00:16:54.913 1 904 libcrypto.so 00:16:54.913 ----------------------------------------------------- 00:16:54.913 00:16:54.913 00:16:54.913 real 0m14.743s 00:16:54.913 user 0m6.527s 00:16:54.913 sys 0m6.073s 00:16:54.913 11:24:21 nvme_xnvme.xnvme_fio_plugin 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:54.913 11:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:54.913 ************************************ 00:16:54.913 END TEST xnvme_fio_plugin 00:16:54.913 ************************************ 00:16:54.913 11:24:21 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:54.913 11:24:21 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:54.913 11:24:21 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:16:54.913 11:24:21 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:16:54.913 11:24:21 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:54.913 11:24:21 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:54.913 11:24:21 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:54.913 11:24:21 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:54.913 11:24:21 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:54.913 11:24:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:54.913 11:24:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.913 11:24:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:54.913 ************************************ 00:16:54.913 START TEST xnvme_rpc 00:16:54.913 ************************************ 00:16:54.913 11:24:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:54.913 11:24:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:54.913 11:24:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:54.913 11:24:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:54.913 11:24:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:54.913 11:24:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71422 00:16:54.913 11:24:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.913 11:24:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71422 00:16:54.913 11:24:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71422 ']' 00:16:54.913 11:24:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.913 11:24:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.913 11:24:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.913 11:24:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.913 11:24:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.172 [2024-12-10 11:24:22.082966] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
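From this point the log repeats the same three tests (xnvme_rpc, xnvme_bdevperf, xnvme_fio_plugin) with io_mechanism switched from libaio to io_uring, again under both conserve_cpu settings. Condensed, the matrix the harness walks looks like the sketch below; the real driver is xnvme.sh, the empty string stands for "no -c flag" exactly as rpc_cmd receives it, and only the two mechanisms exercised in this log are listed:

declare -A cc=( [false]="" [true]="-c" )
for io in libaio io_uring; do
  for on in false true; do
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev "$io" ${cc[$on]}
    # ... per-combination RPC checks and perf runs happen here ...
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
  done
done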
00:16:55.172 [2024-12-10 11:24:22.083095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71422 ] 00:16:55.172 [2024-12-10 11:24:22.258265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.432 [2024-12-10 11:24:22.364597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.369 xnvme_bdev 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71422 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71422 ']' 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71422 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71422 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:56.369 killing process with pid 71422 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71422' 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71422 00:16:56.369 11:24:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71422 00:16:58.903 00:16:58.903 real 0m3.814s 00:16:58.903 user 0m3.869s 00:16:58.903 sys 0m0.542s 00:16:58.903 11:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.903 11:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.903 ************************************ 00:16:58.903 END TEST xnvme_rpc 00:16:58.903 ************************************ 00:16:58.903 11:24:25 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:58.903 11:24:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:58.903 11:24:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.903 11:24:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:58.903 ************************************ 00:16:58.903 START TEST xnvme_bdevperf 00:16:58.903 ************************************ 00:16:58.903 11:24:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:58.903 11:24:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:58.903 11:24:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:16:58.903 11:24:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:58.903 11:24:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:58.903 11:24:25 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:58.903 11:24:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:58.903 11:24:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:58.903 { 00:16:58.903 "subsystems": [ 00:16:58.903 { 00:16:58.903 "subsystem": "bdev", 00:16:58.903 "config": [ 00:16:58.903 { 00:16:58.903 "params": { 00:16:58.903 "io_mechanism": "io_uring", 00:16:58.903 "conserve_cpu": false, 00:16:58.903 "filename": "/dev/nvme0n1", 00:16:58.903 "name": "xnvme_bdev" 00:16:58.903 }, 00:16:58.903 "method": "bdev_xnvme_create" 00:16:58.903 }, 00:16:58.903 { 00:16:58.903 "method": "bdev_wait_for_examine" 00:16:58.903 } 00:16:58.903 ] 00:16:58.903 } 00:16:58.903 ] 00:16:58.903 } 00:16:58.903 [2024-12-10 11:24:25.964638] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:16:58.903 [2024-12-10 11:24:25.964755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71502 ] 00:16:59.162 [2024-12-10 11:24:26.143857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.162 [2024-12-10 11:24:26.248393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.730 Running I/O for 5 seconds... 00:17:01.619 39104.00 IOPS, 152.75 MiB/s [2024-12-10T11:24:29.670Z] 36696.50 IOPS, 143.35 MiB/s [2024-12-10T11:24:30.611Z] 35655.33 IOPS, 139.28 MiB/s [2024-12-10T11:24:31.990Z] 34853.00 IOPS, 136.14 MiB/s 00:17:04.876 Latency(us) 00:17:04.876 [2024-12-10T11:24:31.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.876 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:04.876 xnvme_bdev : 5.00 35164.74 137.36 0.00 0.00 1815.04 618.51 11212.18 00:17:04.876 [2024-12-10T11:24:31.990Z] =================================================================================================================== 00:17:04.876 [2024-12-10T11:24:31.990Z] Total : 35164.74 137.36 0.00 0.00 1815.04 618.51 11212.18 00:17:05.813 11:24:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:05.813 11:24:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:05.813 11:24:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:05.813 11:24:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:05.813 11:24:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:05.813 { 00:17:05.813 "subsystems": [ 00:17:05.813 { 00:17:05.813 "subsystem": "bdev", 00:17:05.813 "config": [ 00:17:05.813 { 00:17:05.813 "params": { 00:17:05.813 "io_mechanism": "io_uring", 00:17:05.813 "conserve_cpu": false, 00:17:05.813 "filename": "/dev/nvme0n1", 00:17:05.813 "name": "xnvme_bdev" 00:17:05.813 }, 00:17:05.813 "method": "bdev_xnvme_create" 00:17:05.813 }, 00:17:05.813 { 00:17:05.813 "method": "bdev_wait_for_examine" 00:17:05.813 } 00:17:05.813 ] 00:17:05.813 } 00:17:05.813 ] 00:17:05.813 } 00:17:05.813 [2024-12-10 11:24:32.775155] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
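As an alternative to the file-based sketch earlier, bdevperf only needs a readable path for --json, so bash process substitution can stand in for the gen_conf/fd-62 plumbing. A sketch for the io_uring randwrite run that follows, config copied from the block above:

./build/examples/bdevperf -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 --json <(cat <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"io_mechanism":"io_uring","conserve_cpu":false,
             "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
   "method":"bdev_xnvme_create"},
  {"method":"bdev_wait_for_examine"}]}]}
EOF
)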
00:17:05.813 [2024-12-10 11:24:32.775274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71577 ] 00:17:06.073 [2024-12-10 11:24:32.956674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.073 [2024-12-10 11:24:33.063731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.331 Running I/O for 5 seconds... 00:17:08.645 29120.00 IOPS, 113.75 MiB/s [2024-12-10T11:24:36.695Z] 28992.00 IOPS, 113.25 MiB/s [2024-12-10T11:24:37.632Z] 28864.00 IOPS, 112.75 MiB/s [2024-12-10T11:24:38.569Z] 28688.00 IOPS, 112.06 MiB/s [2024-12-10T11:24:38.569Z] 28480.00 IOPS, 111.25 MiB/s 00:17:11.455 Latency(us) 00:17:11.455 [2024-12-10T11:24:38.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.455 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:11.455 xnvme_bdev : 5.01 28436.69 111.08 0.00 0.00 2244.01 1506.80 7843.26 00:17:11.455 [2024-12-10T11:24:38.569Z] =================================================================================================================== 00:17:11.455 [2024-12-10T11:24:38.569Z] Total : 28436.69 111.08 0.00 0.00 2244.01 1506.80 7843.26 00:17:12.832 00:17:12.832 real 0m13.639s 00:17:12.832 user 0m6.388s 00:17:12.832 sys 0m7.039s 00:17:12.832 11:24:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.832 11:24:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:12.832 ************************************ 00:17:12.832 END TEST xnvme_bdevperf 00:17:12.832 ************************************ 00:17:12.832 11:24:39 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:12.832 11:24:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:12.832 11:24:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.832 11:24:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:12.832 ************************************ 00:17:12.832 START TEST xnvme_fio_plugin 00:17:12.832 ************************************ 00:17:12.832 11:24:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:12.832 11:24:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:12.832 11:24:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:12.833 11:24:39 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:12.833 11:24:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:12.833 { 00:17:12.833 "subsystems": [ 00:17:12.833 { 00:17:12.833 "subsystem": "bdev", 00:17:12.833 "config": [ 00:17:12.833 { 00:17:12.833 "params": { 00:17:12.833 "io_mechanism": "io_uring", 00:17:12.833 "conserve_cpu": false, 00:17:12.833 "filename": "/dev/nvme0n1", 00:17:12.833 "name": "xnvme_bdev" 00:17:12.833 }, 00:17:12.833 "method": "bdev_xnvme_create" 00:17:12.833 }, 00:17:12.833 { 00:17:12.833 "method": "bdev_wait_for_examine" 00:17:12.833 } 00:17:12.833 ] 00:17:12.833 } 00:17:12.833 ] 00:17:12.833 } 00:17:12.833 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:12.833 fio-3.35 00:17:12.833 Starting 1 thread 00:17:19.474 00:17:19.474 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71702: Tue Dec 10 11:24:45 2024 00:17:19.474 read: IOPS=25.0k, BW=97.7MiB/s (102MB/s)(489MiB/5002msec) 00:17:19.474 slat (nsec): min=2422, max=74800, avg=7492.52, stdev=3389.27 00:17:19.474 clat (usec): min=974, max=6507, avg=2260.47, stdev=366.02 00:17:19.474 lat (usec): min=977, max=6518, avg=2267.96, stdev=367.47 00:17:19.474 clat percentiles (usec): 00:17:19.474 | 1.00th=[ 1237], 5.00th=[ 1582], 10.00th=[ 1795], 20.00th=[ 2008], 00:17:19.474 | 30.00th=[ 2147], 40.00th=[ 2212], 50.00th=[ 2311], 60.00th=[ 2376], 00:17:19.474 | 70.00th=[ 2474], 80.00th=[ 2540], 90.00th=[ 2671], 95.00th=[ 2737], 00:17:19.474 | 99.00th=[ 2868], 99.50th=[ 2900], 99.90th=[ 5473], 99.95th=[ 6063], 00:17:19.474 | 99.99th=[ 6390] 
00:17:19.474 bw ( KiB/s): min=91136, max=116224, per=100.00%, avg=100329.22, stdev=7690.53, samples=9 00:17:19.474 iops : min=22784, max=29056, avg=25082.22, stdev=1922.61, samples=9 00:17:19.474 lat (usec) : 1000=0.01% 00:17:19.474 lat (msec) : 2=19.72%, 4=80.17%, 10=0.10% 00:17:19.474 cpu : usr=35.27%, sys=63.19%, ctx=14, majf=0, minf=762 00:17:19.474 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:19.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.474 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:19.474 issued rwts: total=125056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.474 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:19.474 00:17:19.474 Run status group 0 (all jobs): 00:17:19.474 READ: bw=97.7MiB/s (102MB/s), 97.7MiB/s-97.7MiB/s (102MB/s-102MB/s), io=489MiB (512MB), run=5002-5002msec 00:17:20.043 ----------------------------------------------------- 00:17:20.043 Suppressions used: 00:17:20.043 count bytes template 00:17:20.043 1 11 /usr/src/fio/parse.c 00:17:20.043 1 8 libtcmalloc_minimal.so 00:17:20.043 1 904 libcrypto.so 00:17:20.043 ----------------------------------------------------- 00:17:20.043 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:20.043 
11:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:20.043 11:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:20.043 { 00:17:20.043 "subsystems": [ 00:17:20.043 { 00:17:20.043 "subsystem": "bdev", 00:17:20.043 "config": [ 00:17:20.043 { 00:17:20.043 "params": { 00:17:20.043 "io_mechanism": "io_uring", 00:17:20.043 "conserve_cpu": false, 00:17:20.043 "filename": "/dev/nvme0n1", 00:17:20.043 "name": "xnvme_bdev" 00:17:20.043 }, 00:17:20.043 "method": "bdev_xnvme_create" 00:17:20.043 }, 00:17:20.043 { 00:17:20.043 "method": "bdev_wait_for_examine" 00:17:20.043 } 00:17:20.043 ] 00:17:20.043 } 00:17:20.043 ] 00:17:20.043 } 00:17:20.043 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:20.043 fio-3.35 00:17:20.043 Starting 1 thread 00:17:26.644 00:17:26.644 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71799: Tue Dec 10 11:24:52 2024 00:17:26.644 write: IOPS=24.9k, BW=97.2MiB/s (102MB/s)(486MiB/5002msec); 0 zone resets 00:17:26.644 slat (usec): min=2, max=333, avg= 7.58, stdev= 3.66 00:17:26.644 clat (usec): min=1048, max=4530, avg=2266.52, stdev=363.95 00:17:26.644 lat (usec): min=1051, max=4541, avg=2274.10, stdev=365.32 00:17:26.644 clat percentiles (usec): 00:17:26.644 | 1.00th=[ 1254], 5.00th=[ 1532], 10.00th=[ 1811], 20.00th=[ 2024], 00:17:26.644 | 30.00th=[ 2147], 40.00th=[ 2212], 50.00th=[ 2311], 60.00th=[ 2376], 00:17:26.644 | 70.00th=[ 2474], 80.00th=[ 2540], 90.00th=[ 2671], 95.00th=[ 2769], 00:17:26.644 | 99.00th=[ 3032], 99.50th=[ 3261], 99.90th=[ 3916], 99.95th=[ 4146], 00:17:26.644 | 99.99th=[ 4359] 00:17:26.644 bw ( KiB/s): min=92176, max=118509, per=100.00%, avg=99563.40, stdev=7492.49, samples=10 00:17:26.644 iops : min=23044, max=29627, avg=24891.00, stdev=1872.67, samples=10 00:17:26.644 lat (msec) : 2=18.87%, 4=81.05%, 10=0.08% 00:17:26.644 cpu : usr=35.81%, sys=62.41%, ctx=29, majf=0, minf=763 00:17:26.644 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=24.9%, 32=50.1%, >=64=1.6% 00:17:26.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.644 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:17:26.644 issued rwts: total=0,124468,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:26.644 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:26.644 00:17:26.644 Run status group 0 (all jobs): 00:17:26.644 WRITE: bw=97.2MiB/s (102MB/s), 97.2MiB/s-97.2MiB/s (102MB/s-102MB/s), io=486MiB (510MB), run=5002-5002msec 00:17:27.212 ----------------------------------------------------- 00:17:27.212 Suppressions used: 00:17:27.212 count bytes template 00:17:27.212 1 11 /usr/src/fio/parse.c 00:17:27.212 1 8 libtcmalloc_minimal.so 00:17:27.212 1 904 libcrypto.so 00:17:27.212 ----------------------------------------------------- 00:17:27.212 00:17:27.212 00:17:27.212 real 0m14.624s 00:17:27.212 user 0m7.297s 00:17:27.212 sys 0m6.915s 00:17:27.212 11:24:54 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.212 11:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:27.212 ************************************ 00:17:27.212 END TEST xnvme_fio_plugin 00:17:27.212 ************************************ 00:17:27.212 11:24:54 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:27.212 11:24:54 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:27.212 11:24:54 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:27.212 11:24:54 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:27.212 11:24:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:27.212 11:24:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.212 11:24:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:27.212 ************************************ 00:17:27.212 START TEST xnvme_rpc 00:17:27.212 ************************************ 00:17:27.212 11:24:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:27.212 11:24:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:27.212 11:24:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:27.212 11:24:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:27.212 11:24:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:27.212 11:24:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71885 00:17:27.212 11:24:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:27.212 11:24:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71885 00:17:27.212 11:24:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71885 ']' 00:17:27.212 11:24:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.212 11:24:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.212 11:24:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.213 11:24:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.213 11:24:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.471 [2024-12-10 11:24:54.380136] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
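The waitforlisten/killprocess pair that brackets every spdk_tgt run in this log boils down to polling the RPC socket until it answers, then signalling the pid only if it is still alive. A rough sketch of that pattern; the harness's real helpers in autotest_common.sh add retry limits and sudo handling on top:

./build/bin/spdk_tgt & pid=$!
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do   # waitforlisten
  sleep 0.1
done
# ... bdev_xnvme_create / framework_get_config checks run here ...
kill -0 "$pid" 2>/dev/null && kill "$pid"   # killprocess: signal only if alive
wait "$pid" 2>/dev/null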
00:17:27.471 [2024-12-10 11:24:54.380278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71885 ] 00:17:27.471 [2024-12-10 11:24:54.545708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.730 [2024-12-10 11:24:54.654837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.666 xnvme_bdev 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71885 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71885 ']' 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71885 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71885 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.666 killing process with pid 71885 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71885' 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71885 00:17:28.666 11:24:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71885 00:17:31.251 00:17:31.251 real 0m3.800s 00:17:31.251 user 0m3.873s 00:17:31.251 sys 0m0.526s 00:17:31.251 11:24:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.251 11:24:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.251 ************************************ 00:17:31.252 END TEST xnvme_rpc 00:17:31.252 ************************************ 00:17:31.252 11:24:58 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:31.252 11:24:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:31.252 11:24:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.252 11:24:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:31.252 ************************************ 00:17:31.252 START TEST xnvme_bdevperf 00:17:31.252 ************************************ 00:17:31.252 11:24:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:31.252 11:24:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:31.252 11:24:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:17:31.252 11:24:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:31.252 11:24:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:31.252 11:24:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:17:31.252 11:24:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:31.252 11:24:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:31.252 { 00:17:31.252 "subsystems": [ 00:17:31.252 { 00:17:31.252 "subsystem": "bdev", 00:17:31.252 "config": [ 00:17:31.252 { 00:17:31.252 "params": { 00:17:31.252 "io_mechanism": "io_uring", 00:17:31.252 "conserve_cpu": true, 00:17:31.252 "filename": "/dev/nvme0n1", 00:17:31.252 "name": "xnvme_bdev" 00:17:31.252 }, 00:17:31.252 "method": "bdev_xnvme_create" 00:17:31.252 }, 00:17:31.252 { 00:17:31.252 "method": "bdev_wait_for_examine" 00:17:31.252 } 00:17:31.252 ] 00:17:31.252 } 00:17:31.252 ] 00:17:31.252 } 00:17:31.252 [2024-12-10 11:24:58.240328] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:17:31.252 [2024-12-10 11:24:58.240586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71966 ] 00:17:31.511 [2024-12-10 11:24:58.421174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.511 [2024-12-10 11:24:58.520404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.773 Running I/O for 5 seconds... 00:17:34.088 31808.00 IOPS, 124.25 MiB/s [2024-12-10T11:25:02.139Z] 28736.00 IOPS, 112.25 MiB/s [2024-12-10T11:25:03.076Z] 29738.67 IOPS, 116.17 MiB/s [2024-12-10T11:25:04.013Z] 28720.00 IOPS, 112.19 MiB/s 00:17:36.899 Latency(us) 00:17:36.899 [2024-12-10T11:25:04.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.899 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:36.899 xnvme_bdev : 5.00 28565.91 111.59 0.00 0.00 2233.95 894.87 8106.46 00:17:36.899 [2024-12-10T11:25:04.013Z] =================================================================================================================== 00:17:36.899 [2024-12-10T11:25:04.013Z] Total : 28565.91 111.59 0.00 0.00 2233.95 894.87 8106.46 00:17:37.836 11:25:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:37.836 11:25:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:37.836 11:25:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:37.836 11:25:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:37.836 11:25:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:38.096 { 00:17:38.096 "subsystems": [ 00:17:38.096 { 00:17:38.096 "subsystem": "bdev", 00:17:38.096 "config": [ 00:17:38.096 { 00:17:38.096 "params": { 00:17:38.096 "io_mechanism": "io_uring", 00:17:38.096 "conserve_cpu": true, 00:17:38.096 "filename": "/dev/nvme0n1", 00:17:38.096 "name": "xnvme_bdev" 00:17:38.096 }, 00:17:38.096 "method": "bdev_xnvme_create" 00:17:38.096 }, 00:17:38.096 { 00:17:38.096 "method": "bdev_wait_for_examine" 00:17:38.096 } 00:17:38.096 ] 00:17:38.096 } 00:17:38.096 ] 00:17:38.096 } 00:17:38.096 [2024-12-10 11:25:05.024255] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
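The JSON subsystem blob printed above never touches disk: gen_conf emits it on stdout and bdevperf reads it back through an inherited file descriptor (/dev/fd/62 in the command line). A sketch of that plumbing, assuming bash process substitution supplies the descriptor; the number itself is whatever the shell assigns:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_conf) \
    -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096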
00:17:38.096 [2024-12-10 11:25:05.024524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72047 ] 00:17:38.096 [2024-12-10 11:25:05.204492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.355 [2024-12-10 11:25:05.311962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.614 Running I/O for 5 seconds... 00:17:40.929 25344.00 IOPS, 99.00 MiB/s [2024-12-10T11:25:08.979Z] 25248.00 IOPS, 98.62 MiB/s [2024-12-10T11:25:09.917Z] 24682.67 IOPS, 96.42 MiB/s [2024-12-10T11:25:10.855Z] 24656.00 IOPS, 96.31 MiB/s 00:17:43.741 Latency(us) 00:17:43.741 [2024-12-10T11:25:10.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.741 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:43.741 xnvme_bdev : 5.00 24579.20 96.01 0.00 0.00 2595.37 1223.87 8264.38 00:17:43.741 [2024-12-10T11:25:10.855Z] =================================================================================================================== 00:17:43.741 [2024-12-10T11:25:10.855Z] Total : 24579.20 96.01 0.00 0.00 2595.37 1223.87 8264.38 00:17:44.730 ************************************ 00:17:44.730 END TEST xnvme_bdevperf 00:17:44.730 ************************************ 00:17:44.730 00:17:44.730 real 0m13.587s 00:17:44.730 user 0m7.307s 00:17:44.730 sys 0m5.725s 00:17:44.730 11:25:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.730 11:25:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:44.730 11:25:11 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:44.730 11:25:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:44.730 11:25:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.730 11:25:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:44.730 ************************************ 00:17:44.730 START TEST xnvme_fio_plugin 00:17:44.730 ************************************ 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:44.730 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:44.989 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:44.989 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:44.989 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:44.989 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:44.989 11:25:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:44.989 { 00:17:44.989 "subsystems": [ 00:17:44.989 { 00:17:44.989 "subsystem": "bdev", 00:17:44.989 "config": [ 00:17:44.989 { 00:17:44.989 "params": { 00:17:44.989 "io_mechanism": "io_uring", 00:17:44.989 "conserve_cpu": true, 00:17:44.989 "filename": "/dev/nvme0n1", 00:17:44.989 "name": "xnvme_bdev" 00:17:44.989 }, 00:17:44.989 "method": "bdev_xnvme_create" 00:17:44.989 }, 00:17:44.989 { 00:17:44.990 "method": "bdev_wait_for_examine" 00:17:44.990 } 00:17:44.990 ] 00:17:44.990 } 00:17:44.990 ] 00:17:44.990 } 00:17:44.990 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:44.990 fio-3.35 00:17:44.990 Starting 1 thread 00:17:51.555 00:17:51.555 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72166: Tue Dec 10 11:25:17 2024 00:17:51.555 read: IOPS=23.9k, BW=93.2MiB/s (97.8MB/s)(466MiB/5001msec) 00:17:51.555 slat (nsec): min=2844, max=75573, avg=8191.43, stdev=3333.80 00:17:51.555 clat (usec): min=1358, max=4997, avg=2357.23, stdev=276.03 00:17:51.555 lat (usec): min=1362, max=5005, avg=2365.42, stdev=277.17 00:17:51.555 clat percentiles (usec): 00:17:51.555 | 1.00th=[ 1598], 5.00th=[ 1827], 10.00th=[ 2008], 20.00th=[ 2147], 00:17:51.555 | 30.00th=[ 2245], 40.00th=[ 2311], 50.00th=[ 2376], 60.00th=[ 2442], 00:17:51.555 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 2671], 95.00th=[ 2737], 00:17:51.555 | 99.00th=[ 2835], 99.50th=[ 2868], 99.90th=[ 2999], 99.95th=[ 4621], 00:17:51.555 | 99.99th=[ 4948] 00:17:51.555 bw ( KiB/s): min=91136, max=100864, per=100.00%, avg=95836.67, 
stdev=3459.92, samples=9 00:17:51.555 iops : min=22784, max=25216, avg=23959.11, stdev=865.00, samples=9 00:17:51.555 lat (msec) : 2=10.01%, 4=89.94%, 10=0.05% 00:17:51.555 cpu : usr=39.10%, sys=55.92%, ctx=14, majf=0, minf=762 00:17:51.555 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:51.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.555 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:51.555 issued rwts: total=119360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:51.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:51.555 00:17:51.555 Run status group 0 (all jobs): 00:17:51.555 READ: bw=93.2MiB/s (97.8MB/s), 93.2MiB/s-93.2MiB/s (97.8MB/s-97.8MB/s), io=466MiB (489MB), run=5001-5001msec 00:17:52.123 ----------------------------------------------------- 00:17:52.123 Suppressions used: 00:17:52.123 count bytes template 00:17:52.123 1 11 /usr/src/fio/parse.c 00:17:52.123 1 8 libtcmalloc_minimal.so 00:17:52.123 1 904 libcrypto.so 00:17:52.123 ----------------------------------------------------- 00:17:52.123 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:52.123 { 00:17:52.123 "subsystems": [ 00:17:52.123 { 00:17:52.123 "subsystem": "bdev", 00:17:52.123 "config": [ 00:17:52.123 { 00:17:52.123 "params": { 00:17:52.123 "io_mechanism": "io_uring", 00:17:52.123 "conserve_cpu": true, 00:17:52.123 
"filename": "/dev/nvme0n1", 00:17:52.123 "name": "xnvme_bdev" 00:17:52.123 }, 00:17:52.123 "method": "bdev_xnvme_create" 00:17:52.123 }, 00:17:52.123 { 00:17:52.123 "method": "bdev_wait_for_examine" 00:17:52.123 } 00:17:52.123 ] 00:17:52.123 } 00:17:52.123 ] 00:17:52.123 } 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:52.123 11:25:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:52.382 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:52.382 fio-3.35 00:17:52.382 Starting 1 thread 00:17:58.975 00:17:58.975 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72258: Tue Dec 10 11:25:25 2024 00:17:58.975 write: IOPS=24.7k, BW=96.7MiB/s (101MB/s)(484MiB/5002msec); 0 zone resets 00:17:58.975 slat (usec): min=2, max=100, avg= 7.59, stdev= 3.31 00:17:58.975 clat (usec): min=1298, max=8071, avg=2281.19, stdev=361.56 00:17:58.975 lat (usec): min=1303, max=8095, avg=2288.78, stdev=362.90 00:17:58.975 clat percentiles (usec): 00:17:58.975 | 1.00th=[ 1500], 5.00th=[ 1680], 10.00th=[ 1795], 20.00th=[ 1991], 00:17:58.975 | 30.00th=[ 2114], 40.00th=[ 2212], 50.00th=[ 2311], 60.00th=[ 2376], 00:17:58.975 | 70.00th=[ 2474], 80.00th=[ 2573], 90.00th=[ 2704], 95.00th=[ 2769], 00:17:58.975 | 99.00th=[ 2868], 99.50th=[ 2933], 99.90th=[ 5211], 99.95th=[ 7570], 00:17:58.975 | 99.99th=[ 7963] 00:17:58.975 bw ( KiB/s): min=92672, max=115712, per=100.00%, avg=98986.67, stdev=7388.61, samples=9 00:17:58.975 iops : min=23168, max=28928, avg=24746.67, stdev=1847.15, samples=9 00:17:58.975 lat (msec) : 2=21.01%, 4=78.84%, 10=0.16% 00:17:58.975 cpu : usr=43.67%, sys=51.89%, ctx=34, majf=0, minf=763 00:17:58.975 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:58.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.975 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:58.975 issued rwts: total=0,123776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.975 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:58.975 00:17:58.975 Run status group 0 (all jobs): 00:17:58.975 WRITE: bw=96.7MiB/s (101MB/s), 96.7MiB/s-96.7MiB/s (101MB/s-101MB/s), io=484MiB (507MB), run=5002-5002msec 00:17:59.544 ----------------------------------------------------- 00:17:59.544 Suppressions used: 00:17:59.544 count bytes template 00:17:59.544 1 11 /usr/src/fio/parse.c 00:17:59.544 1 8 libtcmalloc_minimal.so 00:17:59.544 1 904 libcrypto.so 00:17:59.544 ----------------------------------------------------- 00:17:59.544 00:17:59.544 00:17:59.544 real 0m14.591s 00:17:59.544 user 0m7.821s 00:17:59.544 sys 0m6.030s 00:17:59.544 11:25:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.544 11:25:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- 
# set +x 00:17:59.544 ************************************ 00:17:59.544 END TEST xnvme_fio_plugin 00:17:59.544 ************************************ 00:17:59.544 11:25:26 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:17:59.544 11:25:26 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:17:59.544 11:25:26 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:17:59.544 11:25:26 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:17:59.544 11:25:26 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:17:59.544 11:25:26 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:59.544 11:25:26 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:17:59.544 11:25:26 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:17:59.544 11:25:26 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:59.544 11:25:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:59.544 11:25:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.544 11:25:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:59.544 ************************************ 00:17:59.544 START TEST xnvme_rpc 00:17:59.544 ************************************ 00:17:59.544 11:25:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:59.544 11:25:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:59.544 11:25:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:59.544 11:25:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:59.544 11:25:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:59.544 11:25:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72348 00:17:59.544 11:25:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.544 11:25:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72348 00:17:59.544 11:25:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72348 ']' 00:17:59.544 11:25:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.544 11:25:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.544 11:25:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.544 11:25:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.544 11:25:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.544 [2024-12-10 11:25:26.577902] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
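This second xnvme_rpc pass targets the NVMe character device /dev/ng0n1 through io_uring_cmd, with conserve_cpu left false: the empty '' argument selects the empty entry of the cc lookup table, so no -c flag is passed. Condensed from the create/verify calls below:

rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd ''
rpc_cmd framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: false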
00:17:59.544 [2024-12-10 11:25:26.578614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72348 ] 00:17:59.804 [2024-12-10 11:25:26.759892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.804 [2024-12-10 11:25:26.862047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.741 xnvme_bdev 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:18:00.741 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:00.742 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:00.742 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.742 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.742 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:01.000 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.000 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:18:01.000 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:01.000 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:01.001 
11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72348 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72348 ']' 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72348 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72348 00:18:01.001 killing process with pid 72348 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72348' 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72348 00:18:01.001 11:25:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72348 00:18:03.534 00:18:03.534 real 0m3.852s 00:18:03.534 user 0m3.910s 00:18:03.534 sys 0m0.548s 00:18:03.534 11:25:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.534 ************************************ 00:18:03.534 END TEST xnvme_rpc 00:18:03.534 ************************************ 00:18:03.534 11:25:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.534 11:25:30 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:03.534 11:25:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:03.534 11:25:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:03.534 11:25:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:03.534 ************************************ 00:18:03.534 START TEST xnvme_bdevperf 00:18:03.534 ************************************ 00:18:03.534 11:25:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:03.534 11:25:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:03.534 11:25:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:18:03.534 11:25:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:03.534 11:25:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:03.534 11:25:30 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:03.534 11:25:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:03.534 11:25:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:03.534 { 00:18:03.534 "subsystems": [ 00:18:03.534 { 00:18:03.534 "subsystem": "bdev", 00:18:03.534 "config": [ 00:18:03.534 { 00:18:03.534 "params": { 00:18:03.534 "io_mechanism": "io_uring_cmd", 00:18:03.534 "conserve_cpu": false, 00:18:03.534 "filename": "/dev/ng0n1", 00:18:03.534 "name": "xnvme_bdev" 00:18:03.534 }, 00:18:03.534 "method": "bdev_xnvme_create" 00:18:03.534 }, 00:18:03.534 { 00:18:03.534 "method": "bdev_wait_for_examine" 00:18:03.534 } 00:18:03.534 ] 00:18:03.534 } 00:18:03.534 ] 00:18:03.534 } 00:18:03.534 [2024-12-10 11:25:30.481824] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:18:03.534 [2024-12-10 11:25:30.482093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72433 ] 00:18:03.793 [2024-12-10 11:25:30.660486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.793 [2024-12-10 11:25:30.767858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.052 Running I/O for 5 seconds... 00:18:06.018 26240.00 IOPS, 102.50 MiB/s [2024-12-10T11:25:34.513Z] 28128.00 IOPS, 109.88 MiB/s [2024-12-10T11:25:35.451Z] 28714.67 IOPS, 112.17 MiB/s [2024-12-10T11:25:36.389Z] 27888.00 IOPS, 108.94 MiB/s 00:18:09.275 Latency(us) 00:18:09.275 [2024-12-10T11:25:36.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.276 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:09.276 xnvme_bdev : 5.01 27552.74 107.63 0.00 0.00 2315.66 940.93 8001.18 00:18:09.276 [2024-12-10T11:25:36.390Z] =================================================================================================================== 00:18:09.276 [2024-12-10T11:25:36.390Z] Total : 27552.74 107.63 0.00 0.00 2315.66 940.93 8001.18 00:18:10.214 11:25:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:10.214 11:25:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:10.214 11:25:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:10.214 11:25:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:10.214 11:25:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:10.214 { 00:18:10.214 "subsystems": [ 00:18:10.214 { 00:18:10.214 "subsystem": "bdev", 00:18:10.214 "config": [ 00:18:10.214 { 00:18:10.214 "params": { 00:18:10.214 "io_mechanism": "io_uring_cmd", 00:18:10.214 "conserve_cpu": false, 00:18:10.214 "filename": "/dev/ng0n1", 00:18:10.214 "name": "xnvme_bdev" 00:18:10.214 }, 00:18:10.214 "method": "bdev_xnvme_create" 00:18:10.214 }, 00:18:10.214 { 00:18:10.214 "method": "bdev_wait_for_examine" 00:18:10.214 } 00:18:10.214 ] 00:18:10.214 } 00:18:10.214 ] 00:18:10.214 } 00:18:10.214 [2024-12-10 11:25:37.303282] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
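For the char-device path, xnvme_bdevperf widens the workload sweep beyond randread and randwrite to unmap and write_zeroes, which io_uring_cmd carries as NVMe passthrough commands. A condensed sketch of the four runs in this TEST, with the config again assumed to arrive via process substitution:

for w in randread randwrite unmap write_zeroes; do
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_conf) \
        -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
done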
00:18:10.214 [2024-12-10 11:25:37.303389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72514 ] 00:18:10.473 [2024-12-10 11:25:37.484583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.733 [2024-12-10 11:25:37.587473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.992 Running I/O for 5 seconds... 00:18:12.868 24576.00 IOPS, 96.00 MiB/s [2024-12-10T11:25:41.360Z] 23968.00 IOPS, 93.62 MiB/s [2024-12-10T11:25:41.927Z] 23936.00 IOPS, 93.50 MiB/s [2024-12-10T11:25:43.328Z] 24144.00 IOPS, 94.31 MiB/s 00:18:16.214 Latency(us) 00:18:16.214 [2024-12-10T11:25:43.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.214 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:16.214 xnvme_bdev : 5.00 24032.02 93.88 0.00 0.00 2654.22 1276.50 7843.26 00:18:16.214 [2024-12-10T11:25:43.328Z] =================================================================================================================== 00:18:16.214 [2024-12-10T11:25:43.328Z] Total : 24032.02 93.88 0.00 0.00 2654.22 1276.50 7843.26 00:18:17.166 11:25:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:17.166 11:25:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:18:17.166 11:25:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:17.166 11:25:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:17.166 11:25:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:17.167 { 00:18:17.167 "subsystems": [ 00:18:17.167 { 00:18:17.167 "subsystem": "bdev", 00:18:17.167 "config": [ 00:18:17.167 { 00:18:17.167 "params": { 00:18:17.167 "io_mechanism": "io_uring_cmd", 00:18:17.167 "conserve_cpu": false, 00:18:17.167 "filename": "/dev/ng0n1", 00:18:17.167 "name": "xnvme_bdev" 00:18:17.167 }, 00:18:17.167 "method": "bdev_xnvme_create" 00:18:17.167 }, 00:18:17.167 { 00:18:17.167 "method": "bdev_wait_for_examine" 00:18:17.167 } 00:18:17.167 ] 00:18:17.167 } 00:18:17.167 ] 00:18:17.167 } 00:18:17.167 [2024-12-10 11:25:44.091178] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:18:17.167 [2024-12-10 11:25:44.091287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72591 ] 00:18:17.167 [2024-12-10 11:25:44.272737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.426 [2024-12-10 11:25:44.379589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.685 Running I/O for 5 seconds... 
00:18:20.002 70912.00 IOPS, 277.00 MiB/s [2024-12-10T11:25:48.053Z] 71680.00 IOPS, 280.00 MiB/s [2024-12-10T11:25:48.991Z] 71872.00 IOPS, 280.75 MiB/s [2024-12-10T11:25:49.928Z] 71968.00 IOPS, 281.12 MiB/s [2024-12-10T11:25:49.928Z] 72051.20 IOPS, 281.45 MiB/s 00:18:22.814 Latency(us) 00:18:22.814 [2024-12-10T11:25:49.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.814 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:18:22.814 xnvme_bdev : 5.00 72040.69 281.41 0.00 0.00 885.65 651.41 6106.17 00:18:22.814 [2024-12-10T11:25:49.928Z] =================================================================================================================== 00:18:22.814 [2024-12-10T11:25:49.928Z] Total : 72040.69 281.41 0.00 0.00 885.65 651.41 6106.17 00:18:23.751 11:25:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:23.751 11:25:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:18:23.751 11:25:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:23.751 11:25:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:23.751 11:25:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:23.751 { 00:18:23.751 "subsystems": [ 00:18:23.751 { 00:18:23.751 "subsystem": "bdev", 00:18:23.751 "config": [ 00:18:23.751 { 00:18:23.751 "params": { 00:18:23.751 "io_mechanism": "io_uring_cmd", 00:18:23.751 "conserve_cpu": false, 00:18:23.751 "filename": "/dev/ng0n1", 00:18:23.751 "name": "xnvme_bdev" 00:18:23.751 }, 00:18:23.751 "method": "bdev_xnvme_create" 00:18:23.751 }, 00:18:23.751 { 00:18:23.751 "method": "bdev_wait_for_examine" 00:18:23.751 } 00:18:23.751 ] 00:18:23.751 } 00:18:23.751 ] 00:18:23.751 } 00:18:24.010 [2024-12-10 11:25:50.894149] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:18:24.010 [2024-12-10 11:25:50.894269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72674 ] 00:18:24.011 [2024-12-10 11:25:51.074449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.270 [2024-12-10 11:25:51.181467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.529 Running I/O for 5 seconds... 
00:18:26.845 33485.00 IOPS, 130.80 MiB/s [2024-12-10T11:25:54.527Z] 33485.50 IOPS, 130.80 MiB/s [2024-12-10T11:25:55.904Z] 33067.33 IOPS, 129.17 MiB/s [2024-12-10T11:25:56.841Z] 32974.25 IOPS, 128.81 MiB/s [2024-12-10T11:25:56.841Z] 34347.60 IOPS, 134.17 MiB/s 00:18:29.727 Latency(us) 00:18:29.727 [2024-12-10T11:25:56.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.727 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:18:29.727 xnvme_bdev : 5.00 34329.39 134.10 0.00 0.00 1857.06 181.77 9843.56 00:18:29.727 [2024-12-10T11:25:56.841Z] =================================================================================================================== 00:18:29.727 [2024-12-10T11:25:56.841Z] Total : 34329.39 134.10 0.00 0.00 1857.06 181.77 9843.56 00:18:30.664 00:18:30.664 real 0m27.224s 00:18:30.664 user 0m14.602s 00:18:30.664 sys 0m12.141s 00:18:30.664 11:25:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.664 ************************************ 00:18:30.664 END TEST xnvme_bdevperf 00:18:30.664 ************************************ 00:18:30.664 11:25:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:30.664 11:25:57 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:30.664 11:25:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:30.664 11:25:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.664 11:25:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:30.664 ************************************ 00:18:30.664 START TEST xnvme_fio_plugin 00:18:30.664 ************************************ 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
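The lines that follow locate the ASAN runtime the fio plugin was linked against and preload it ahead of the plugin itself, so a stock fio binary can host the instrumented spdk_bdev engine. Condensed, with the paths as printed below (/dev/fd/62 assumes the caller has wired the JSON config to that descriptor, as fio_bdev does here):

asan_lib=$(ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 \
    --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev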
00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:30.664 11:25:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:30.664 { 00:18:30.664 "subsystems": [ 00:18:30.664 { 00:18:30.664 "subsystem": "bdev", 00:18:30.664 "config": [ 00:18:30.664 { 00:18:30.664 "params": { 00:18:30.664 "io_mechanism": "io_uring_cmd", 00:18:30.664 "conserve_cpu": false, 00:18:30.664 "filename": "/dev/ng0n1", 00:18:30.664 "name": "xnvme_bdev" 00:18:30.664 }, 00:18:30.664 "method": "bdev_xnvme_create" 00:18:30.664 }, 00:18:30.664 { 00:18:30.664 "method": "bdev_wait_for_examine" 00:18:30.664 } 00:18:30.664 ] 00:18:30.664 } 00:18:30.664 ] 00:18:30.664 } 00:18:30.924 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:30.924 fio-3.35 00:18:30.924 Starting 1 thread 00:18:37.493 00:18:37.493 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72793: Tue Dec 10 11:26:03 2024 00:18:37.493 read: IOPS=24.7k, BW=96.6MiB/s (101MB/s)(483MiB/5002msec) 00:18:37.493 slat (nsec): min=2360, max=81386, avg=7913.44, stdev=3297.91 00:18:37.493 clat (usec): min=1023, max=7985, avg=2275.87, stdev=331.91 00:18:37.493 lat (usec): min=1025, max=8011, avg=2283.78, stdev=333.05 00:18:37.493 clat percentiles (usec): 00:18:37.493 | 1.00th=[ 1254], 5.00th=[ 1696], 10.00th=[ 1909], 20.00th=[ 2073], 00:18:37.493 | 30.00th=[ 2147], 40.00th=[ 2245], 50.00th=[ 2311], 60.00th=[ 2376], 00:18:37.493 | 70.00th=[ 2442], 80.00th=[ 2540], 90.00th=[ 2638], 95.00th=[ 2671], 00:18:37.493 | 99.00th=[ 2802], 99.50th=[ 2933], 99.90th=[ 3818], 99.95th=[ 7439], 00:18:37.493 | 99.99th=[ 7898] 00:18:37.493 bw ( KiB/s): min=93184, max=116224, per=99.76%, avg=98645.33, stdev=7222.65, samples=9 00:18:37.493 iops : min=23296, max=29056, avg=24661.33, stdev=1805.66, samples=9 00:18:37.493 lat (msec) : 2=14.25%, 4=85.67%, 10=0.08% 00:18:37.493 cpu : usr=39.15%, sys=59.39%, ctx=6, majf=0, minf=762 00:18:37.493 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:37.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.493 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:37.493 
issued rwts: total=123648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:37.493 00:18:37.493 Run status group 0 (all jobs): 00:18:37.493 READ: bw=96.6MiB/s (101MB/s), 96.6MiB/s-96.6MiB/s (101MB/s-101MB/s), io=483MiB (506MB), run=5002-5002msec 00:18:38.061 ----------------------------------------------------- 00:18:38.061 Suppressions used: 00:18:38.061 count bytes template 00:18:38.061 1 11 /usr/src/fio/parse.c 00:18:38.061 1 8 libtcmalloc_minimal.so 00:18:38.061 1 904 libcrypto.so 00:18:38.061 ----------------------------------------------------- 00:18:38.061 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:38.061 { 00:18:38.061 "subsystems": [ 00:18:38.061 { 00:18:38.061 "subsystem": "bdev", 00:18:38.061 "config": [ 00:18:38.061 { 00:18:38.061 "params": { 00:18:38.061 "io_mechanism": "io_uring_cmd", 00:18:38.061 "conserve_cpu": false, 00:18:38.061 "filename": "/dev/ng0n1", 00:18:38.061 "name": "xnvme_bdev" 00:18:38.061 }, 00:18:38.061 "method": "bdev_xnvme_create" 00:18:38.061 }, 00:18:38.061 { 00:18:38.061 "method": "bdev_wait_for_examine" 00:18:38.061 } 00:18:38.061 ] 00:18:38.061 } 00:18:38.061 ] 00:18:38.061 } 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:38.061 11:26:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:38.320 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:38.320 fio-3.35 00:18:38.320 Starting 1 thread 00:18:44.905 00:18:44.905 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72890: Tue Dec 10 11:26:11 2024 00:18:44.905 write: IOPS=27.3k, BW=107MiB/s (112MB/s)(534MiB/5002msec); 0 zone resets 00:18:44.905 slat (usec): min=2, max=114, avg= 7.03, stdev= 3.19 00:18:44.905 clat (usec): min=975, max=4085, avg=2064.95, stdev=388.81 00:18:44.905 lat (usec): min=978, max=4112, avg=2071.98, stdev=390.45 00:18:44.905 clat percentiles (usec): 00:18:44.905 | 1.00th=[ 1123], 5.00th=[ 1303], 10.00th=[ 1516], 20.00th=[ 1745], 00:18:44.905 | 30.00th=[ 1893], 40.00th=[ 2008], 50.00th=[ 2114], 60.00th=[ 2212], 00:18:44.905 | 70.00th=[ 2311], 80.00th=[ 2409], 90.00th=[ 2540], 95.00th=[ 2638], 00:18:44.905 | 99.00th=[ 2737], 99.50th=[ 2835], 99.90th=[ 3261], 99.95th=[ 3556], 00:18:44.905 | 99.99th=[ 3949] 00:18:44.905 bw ( KiB/s): min=94720, max=124416, per=99.60%, avg=108828.44, stdev=10079.09, samples=9 00:18:44.905 iops : min=23680, max=31104, avg=27207.11, stdev=2519.77, samples=9 00:18:44.905 lat (usec) : 1000=0.03% 00:18:44.905 lat (msec) : 2=39.38%, 4=60.59%, 10=0.01% 00:18:44.905 cpu : usr=36.97%, sys=61.71%, ctx=10, majf=0, minf=763 00:18:44.905 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:44.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.906 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:44.906 issued rwts: total=0,136640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:44.906 00:18:44.906 Run status group 0 (all jobs): 00:18:44.906 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=534MiB (560MB), run=5002-5002msec 00:18:45.164 ----------------------------------------------------- 00:18:45.164 Suppressions used: 00:18:45.164 count bytes template 00:18:45.164 1 11 /usr/src/fio/parse.c 00:18:45.164 1 8 libtcmalloc_minimal.so 00:18:45.164 1 904 libcrypto.so 00:18:45.164 ----------------------------------------------------- 00:18:45.164 00:18:45.423 ************************************ 00:18:45.423 END TEST xnvme_fio_plugin 00:18:45.423 ************************************ 00:18:45.423 00:18:45.423 real 0m14.610s 00:18:45.423 user 0m7.538s 00:18:45.423 sys 0m6.687s 00:18:45.423 11:26:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.423 11:26:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:45.423 11:26:12 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:45.423 11:26:12 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:45.423 11:26:12 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:18:45.423 11:26:12 
nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:45.423 11:26:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:45.423 11:26:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:45.423 11:26:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:45.423 ************************************ 00:18:45.423 START TEST xnvme_rpc 00:18:45.423 ************************************ 00:18:45.423 11:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:45.423 11:26:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:45.423 11:26:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:45.423 11:26:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:45.423 11:26:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:45.423 11:26:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72974 00:18:45.423 11:26:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.423 11:26:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72974 00:18:45.423 11:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72974 ']' 00:18:45.423 11:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.423 11:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.423 11:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.423 11:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.423 11:26:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:45.423 [2024-12-10 11:26:12.485239] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
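The final xnvme_rpc pass keeps the /dev/ng0n1 plus io_uring_cmd pairing but creates the bdev with -c, so the stored config should now report conserve_cpu=true. The check, as a sketch of the calls below:

rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
rpc_cmd framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true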
00:18:45.423 [2024-12-10 11:26:12.485541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72974 ] 00:18:45.682 [2024-12-10 11:26:12.658045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.682 [2024-12-10 11:26:12.756525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:46.619 xnvme_bdev 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:46.619 
11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.619 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:46.877 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.877 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:46.877 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:46.877 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.877 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:46.877 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.877 11:26:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72974 00:18:46.877 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72974 ']' 00:18:46.877 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72974 00:18:46.877 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:46.877 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.877 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72974 00:18:46.877 killing process with pid 72974 00:18:46.877 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.877 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.877 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72974' 00:18:46.877 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72974 00:18:46.877 11:26:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72974 00:18:49.412 00:18:49.412 real 0m3.767s 00:18:49.412 user 0m3.845s 00:18:49.412 sys 0m0.513s 00:18:49.412 11:26:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.412 ************************************ 00:18:49.412 END TEST xnvme_rpc 00:18:49.412 ************************************ 00:18:49.412 11:26:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:49.412 11:26:16 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:49.412 11:26:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:49.412 11:26:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.412 11:26:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:49.412 ************************************ 00:18:49.412 START TEST xnvme_bdevperf 00:18:49.412 ************************************ 00:18:49.412 11:26:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:49.412 11:26:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:49.412 11:26:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:18:49.412 11:26:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:49.412 11:26:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:49.412 11:26:16 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:49.412 11:26:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:49.412 11:26:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:49.412 { 00:18:49.412 "subsystems": [ 00:18:49.412 { 00:18:49.412 "subsystem": "bdev", 00:18:49.412 "config": [ 00:18:49.412 { 00:18:49.412 "params": { 00:18:49.412 "io_mechanism": "io_uring_cmd", 00:18:49.412 "conserve_cpu": true, 00:18:49.412 "filename": "/dev/ng0n1", 00:18:49.412 "name": "xnvme_bdev" 00:18:49.412 }, 00:18:49.412 "method": "bdev_xnvme_create" 00:18:49.412 }, 00:18:49.412 { 00:18:49.412 "method": "bdev_wait_for_examine" 00:18:49.412 } 00:18:49.412 ] 00:18:49.412 } 00:18:49.412 ] 00:18:49.412 } 00:18:49.412 [2024-12-10 11:26:16.306952] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:18:49.412 [2024-12-10 11:26:16.307061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73058 ] 00:18:49.412 [2024-12-10 11:26:16.487739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.670 [2024-12-10 11:26:16.592477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.969 Running I/O for 5 seconds... 00:18:51.842 25152.00 IOPS, 98.25 MiB/s [2024-12-10T11:26:20.333Z] 24832.00 IOPS, 97.00 MiB/s [2024-12-10T11:26:21.271Z] 24789.33 IOPS, 96.83 MiB/s [2024-12-10T11:26:22.206Z] 25120.00 IOPS, 98.12 MiB/s [2024-12-10T11:26:22.206Z] 24896.00 IOPS, 97.25 MiB/s 00:18:55.093 Latency(us) 00:18:55.093 [2024-12-10T11:26:22.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.093 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:55.093 xnvme_bdev : 5.01 24857.64 97.10 0.00 0.00 2566.52 1112.01 8001.18 00:18:55.093 [2024-12-10T11:26:22.207Z] =================================================================================================================== 00:18:55.093 [2024-12-10T11:26:22.207Z] Total : 24857.64 97.10 0.00 0.00 2566.52 1112.01 8001.18 00:18:56.029 11:26:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:56.029 11:26:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:56.029 11:26:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:56.029 11:26:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:56.029 11:26:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:56.029 { 00:18:56.029 "subsystems": [ 00:18:56.029 { 00:18:56.029 "subsystem": "bdev", 00:18:56.029 "config": [ 00:18:56.029 { 00:18:56.029 "params": { 00:18:56.029 "io_mechanism": "io_uring_cmd", 00:18:56.029 "conserve_cpu": true, 00:18:56.029 "filename": "/dev/ng0n1", 00:18:56.029 "name": "xnvme_bdev" 00:18:56.029 }, 00:18:56.029 "method": "bdev_xnvme_create" 00:18:56.029 }, 00:18:56.029 { 00:18:56.029 "method": "bdev_wait_for_examine" 00:18:56.029 } 00:18:56.029 ] 00:18:56.029 } 00:18:56.029 ] 00:18:56.029 } 00:18:56.029 [2024-12-10 11:26:23.111788] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
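The JSON blob printed above is what gen_conf feeds bdevperf over /dev/fd/62; the randread pass can be run standalone by saving that blob to a file first. A sketch, assuming the config has been written to /tmp/xnvme.json (a hypothetical path) and the tree has been built:

    # same flags as the harness: queue depth 64, 5 s randread, 4 KiB I/O,
    # targeting only the xnvme_bdev defined in the JSON config
    ./build/examples/bdevperf --json /tmp/xnvme.json \
      -q 64 -w randread -t 5 -T xnvme_bdev -o 4096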
00:18:56.029 [2024-12-10 11:26:23.111900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73132 ] 00:18:56.288 [2024-12-10 11:26:23.292590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.547 [2024-12-10 11:26:23.407593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.805 Running I/O for 5 seconds... 00:18:58.677 27968.00 IOPS, 109.25 MiB/s [2024-12-10T11:26:27.170Z] 28256.00 IOPS, 110.38 MiB/s [2024-12-10T11:26:28.106Z] 28821.33 IOPS, 112.58 MiB/s [2024-12-10T11:26:29.101Z] 29136.00 IOPS, 113.81 MiB/s 00:19:01.987 Latency(us) 00:19:01.987 [2024-12-10T11:26:29.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.987 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:01.987 xnvme_bdev : 5.00 28110.89 109.81 0.00 0.00 2269.39 888.29 7685.35 00:19:01.987 [2024-12-10T11:26:29.101Z] =================================================================================================================== 00:19:01.987 [2024-12-10T11:26:29.101Z] Total : 28110.89 109.81 0.00 0.00 2269.39 888.29 7685.35 00:19:02.923 11:26:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:02.923 11:26:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:19:02.923 11:26:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:02.923 11:26:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:02.923 11:26:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:02.923 { 00:19:02.923 "subsystems": [ 00:19:02.923 { 00:19:02.923 "subsystem": "bdev", 00:19:02.923 "config": [ 00:19:02.923 { 00:19:02.923 "params": { 00:19:02.923 "io_mechanism": "io_uring_cmd", 00:19:02.923 "conserve_cpu": true, 00:19:02.923 "filename": "/dev/ng0n1", 00:19:02.923 "name": "xnvme_bdev" 00:19:02.923 }, 00:19:02.923 "method": "bdev_xnvme_create" 00:19:02.923 }, 00:19:02.923 { 00:19:02.923 "method": "bdev_wait_for_examine" 00:19:02.923 } 00:19:02.923 ] 00:19:02.923 } 00:19:02.923 ] 00:19:02.923 } 00:19:02.923 [2024-12-10 11:26:29.931670] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:19:02.923 [2024-12-10 11:26:29.931780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73212 ] 00:19:03.182 [2024-12-10 11:26:30.108393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.182 [2024-12-10 11:26:30.210880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.750 Running I/O for 5 seconds... 
00:19:05.624 73024.00 IOPS, 285.25 MiB/s [2024-12-10T11:26:33.693Z] 73024.00 IOPS, 285.25 MiB/s [2024-12-10T11:26:34.629Z] 73024.00 IOPS, 285.25 MiB/s [2024-12-10T11:26:35.567Z] 72976.00 IOPS, 285.06 MiB/s 00:19:08.453 Latency(us) 00:19:08.453 [2024-12-10T11:26:35.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.453 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:19:08.453 xnvme_bdev : 5.00 72943.12 284.93 0.00 0.00 874.84 644.83 2381.93 00:19:08.453 [2024-12-10T11:26:35.567Z] =================================================================================================================== 00:19:08.453 [2024-12-10T11:26:35.567Z] Total : 72943.12 284.93 0.00 0.00 874.84 644.83 2381.93 00:19:09.830 11:26:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:09.830 11:26:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:19:09.830 11:26:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:09.830 11:26:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:09.830 11:26:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:09.830 { 00:19:09.830 "subsystems": [ 00:19:09.830 { 00:19:09.830 "subsystem": "bdev", 00:19:09.830 "config": [ 00:19:09.830 { 00:19:09.830 "params": { 00:19:09.830 "io_mechanism": "io_uring_cmd", 00:19:09.830 "conserve_cpu": true, 00:19:09.830 "filename": "/dev/ng0n1", 00:19:09.830 "name": "xnvme_bdev" 00:19:09.830 }, 00:19:09.830 "method": "bdev_xnvme_create" 00:19:09.830 }, 00:19:09.830 { 00:19:09.830 "method": "bdev_wait_for_examine" 00:19:09.830 } 00:19:09.830 ] 00:19:09.830 } 00:19:09.830 ] 00:19:09.830 } 00:19:09.830 [2024-12-10 11:26:36.707947] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:19:09.830 [2024-12-10 11:26:36.708069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73286 ] 00:19:09.830 [2024-12-10 11:26:36.887016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.089 [2024-12-10 11:26:36.992989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.348 Running I/O for 5 seconds... 
00:19:12.221 50709.00 IOPS, 198.08 MiB/s [2024-12-10T11:26:40.712Z] 56246.50 IOPS, 219.71 MiB/s [2024-12-10T11:26:41.645Z] 51942.33 IOPS, 202.90 MiB/s [2024-12-10T11:26:42.580Z] 51712.50 IOPS, 202.00 MiB/s 00:19:15.466 Latency(us) 00:19:15.466 [2024-12-10T11:26:42.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.466 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:19:15.466 xnvme_bdev : 5.00 50932.84 198.96 0.00 0.00 1251.61 66.62 22424.37 00:19:15.466 [2024-12-10T11:26:42.580Z] =================================================================================================================== 00:19:15.466 [2024-12-10T11:26:42.580Z] Total : 50932.84 198.96 0.00 0.00 1251.61 66.62 22424.37 00:19:16.404 00:19:16.404 real 0m27.211s 00:19:16.404 user 0m16.770s 00:19:16.404 sys 0m8.616s 00:19:16.404 11:26:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.404 11:26:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:16.404 ************************************ 00:19:16.404 END TEST xnvme_bdevperf 00:19:16.404 ************************************ 00:19:16.404 11:26:43 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:16.404 11:26:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:16.404 11:26:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.404 11:26:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:16.404 ************************************ 00:19:16.404 START TEST xnvme_fio_plugin 00:19:16.404 ************************************ 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:16.404 11:26:43 
nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:16.404 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:16.663 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:16.663 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:16.663 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:16.663 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:16.663 11:26:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:16.663 { 00:19:16.663 "subsystems": [ 00:19:16.663 { 00:19:16.663 "subsystem": "bdev", 00:19:16.663 "config": [ 00:19:16.663 { 00:19:16.663 "params": { 00:19:16.663 "io_mechanism": "io_uring_cmd", 00:19:16.663 "conserve_cpu": true, 00:19:16.663 "filename": "/dev/ng0n1", 00:19:16.663 "name": "xnvme_bdev" 00:19:16.663 }, 00:19:16.663 "method": "bdev_xnvme_create" 00:19:16.663 }, 00:19:16.663 { 00:19:16.663 "method": "bdev_wait_for_examine" 00:19:16.663 } 00:19:16.663 ] 00:19:16.663 } 00:19:16.663 ] 00:19:16.663 } 00:19:16.663 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:16.663 fio-3.35 00:19:16.663 Starting 1 thread 00:19:23.235 00:19:23.235 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73409: Tue Dec 10 11:26:49 2024 00:19:23.235 read: IOPS=26.5k, BW=104MiB/s (109MB/s)(518MiB/5001msec) 00:19:23.235 slat (usec): min=3, max=122, avg= 6.94, stdev= 2.40 00:19:23.235 clat (usec): min=1530, max=3700, avg=2138.85, stdev=256.02 00:19:23.235 lat (usec): min=1536, max=3726, avg=2145.79, stdev=256.90 00:19:23.235 clat percentiles (usec): 00:19:23.235 | 1.00th=[ 1663], 5.00th=[ 1745], 10.00th=[ 1811], 20.00th=[ 1909], 00:19:23.235 | 30.00th=[ 1991], 40.00th=[ 2057], 50.00th=[ 2114], 60.00th=[ 2180], 00:19:23.235 | 70.00th=[ 2278], 80.00th=[ 2343], 90.00th=[ 2474], 95.00th=[ 2606], 00:19:23.235 | 99.00th=[ 2769], 99.50th=[ 2835], 99.90th=[ 3130], 99.95th=[ 3261], 00:19:23.235 | 99.99th=[ 3589] 00:19:23.235 bw ( KiB/s): min=96768, max=113152, per=99.50%, avg=105585.78, stdev=5601.53, samples=9 00:19:23.235 iops : min=24192, max=28288, avg=26396.44, stdev=1400.38, samples=9 00:19:23.235 lat (msec) : 2=31.87%, 4=68.13% 00:19:23.235 cpu : usr=51.34%, sys=45.84%, ctx=10, majf=0, minf=762 00:19:23.235 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:23.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.235 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:23.235 issued rwts: total=132672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:19:23.235 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:23.235 00:19:23.235 Run status group 0 (all jobs): 00:19:23.235 READ: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=518MiB (543MB), run=5001-5001msec 00:19:23.803 ----------------------------------------------------- 00:19:23.803 Suppressions used: 00:19:23.803 count bytes template 00:19:23.803 1 11 /usr/src/fio/parse.c 00:19:23.803 1 8 libtcmalloc_minimal.so 00:19:23.803 1 904 libcrypto.so 00:19:23.803 ----------------------------------------------------- 00:19:23.803 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:23.803 { 00:19:23.803 "subsystems": [ 00:19:23.803 { 00:19:23.803 "subsystem": "bdev", 00:19:23.803 "config": [ 00:19:23.803 { 00:19:23.803 "params": { 00:19:23.803 "io_mechanism": "io_uring_cmd", 00:19:23.803 "conserve_cpu": true, 00:19:23.803 "filename": "/dev/ng0n1", 00:19:23.803 "name": "xnvme_bdev" 00:19:23.803 }, 00:19:23.803 "method": "bdev_xnvme_create" 00:19:23.803 }, 00:19:23.803 { 00:19:23.803 "method": "bdev_wait_for_examine" 00:19:23.803 } 00:19:23.803 ] 00:19:23.803 } 00:19:23.803 ] 00:19:23.803 } 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1351 -- # break 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:23.803 11:26:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:24.062 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:24.062 fio-3.35 00:19:24.062 Starting 1 thread 00:19:30.629 00:19:30.629 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73501: Tue Dec 10 11:26:56 2024 00:19:30.630 write: IOPS=24.9k, BW=97.3MiB/s (102MB/s)(487MiB/5001msec); 0 zone resets 00:19:30.630 slat (usec): min=2, max=109, avg= 7.83, stdev= 3.17 00:19:30.630 clat (usec): min=1033, max=3394, avg=2257.90, stdev=297.50 00:19:30.630 lat (usec): min=1036, max=3434, avg=2265.73, stdev=298.72 00:19:30.630 clat percentiles (usec): 00:19:30.630 | 1.00th=[ 1319], 5.00th=[ 1762], 10.00th=[ 1893], 20.00th=[ 2024], 00:19:30.630 | 30.00th=[ 2114], 40.00th=[ 2212], 50.00th=[ 2278], 60.00th=[ 2343], 00:19:30.630 | 70.00th=[ 2442], 80.00th=[ 2507], 90.00th=[ 2638], 95.00th=[ 2704], 00:19:30.630 | 99.00th=[ 2802], 99.50th=[ 2835], 99.90th=[ 2933], 99.95th=[ 2999], 00:19:30.630 | 99.99th=[ 3261] 00:19:30.630 bw ( KiB/s): min=92672, max=108544, per=100.00%, avg=100750.22, stdev=6137.48, samples=9 00:19:30.630 iops : min=23168, max=27136, avg=25187.56, stdev=1534.37, samples=9 00:19:30.630 lat (msec) : 2=17.82%, 4=82.18% 00:19:30.630 cpu : usr=47.24%, sys=49.40%, ctx=15, majf=0, minf=763 00:19:30.630 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:30.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.630 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:30.630 issued rwts: total=0,124608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.630 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:30.630 00:19:30.630 Run status group 0 (all jobs): 00:19:30.630 WRITE: bw=97.3MiB/s (102MB/s), 97.3MiB/s-97.3MiB/s (102MB/s-102MB/s), io=487MiB (510MB), run=5001-5001msec 00:19:31.198 ----------------------------------------------------- 00:19:31.198 Suppressions used: 00:19:31.198 count bytes template 00:19:31.198 1 11 /usr/src/fio/parse.c 00:19:31.198 1 8 libtcmalloc_minimal.so 00:19:31.198 1 904 libcrypto.so 00:19:31.198 ----------------------------------------------------- 00:19:31.198 00:19:31.198 ************************************ 00:19:31.198 END TEST xnvme_fio_plugin 00:19:31.198 ************************************ 00:19:31.198 00:19:31.198 real 0m14.543s 00:19:31.198 user 0m8.553s 00:19:31.198 sys 0m5.432s 00:19:31.198 11:26:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.198 11:26:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:31.198 11:26:58 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 72974 00:19:31.198 11:26:58 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72974 ']' 00:19:31.198 11:26:58 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 72974 00:19:31.198 Process with pid 72974 is not found 00:19:31.198 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72974) - No such process 00:19:31.198 
11:26:58 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 72974 is not found' 00:19:31.198 00:19:31.198 real 3m48.825s 00:19:31.198 user 2m3.630s 00:19:31.198 sys 1m28.234s 00:19:31.198 11:26:58 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.198 11:26:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:31.198 ************************************ 00:19:31.198 END TEST nvme_xnvme 00:19:31.198 ************************************ 00:19:31.198 11:26:58 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:31.198 11:26:58 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:31.198 11:26:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.198 11:26:58 -- common/autotest_common.sh@10 -- # set +x 00:19:31.198 ************************************ 00:19:31.198 START TEST blockdev_xnvme 00:19:31.198 ************************************ 00:19:31.198 11:26:58 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:31.198 * Looking for test storage... 00:19:31.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:31.198 11:26:58 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:31.198 11:26:58 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:19:31.198 11:26:58 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:31.458 11:26:58 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.458 11:26:58 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:19:31.458 11:26:58 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.458 11:26:58 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:31.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.458 --rc genhtml_branch_coverage=1 00:19:31.458 --rc genhtml_function_coverage=1 00:19:31.458 --rc genhtml_legend=1 00:19:31.458 --rc geninfo_all_blocks=1 00:19:31.458 --rc geninfo_unexecuted_blocks=1 00:19:31.458 00:19:31.458 ' 00:19:31.458 11:26:58 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:31.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.458 --rc genhtml_branch_coverage=1 00:19:31.459 --rc genhtml_function_coverage=1 00:19:31.459 --rc genhtml_legend=1 00:19:31.459 --rc geninfo_all_blocks=1 00:19:31.459 --rc geninfo_unexecuted_blocks=1 00:19:31.459 00:19:31.459 ' 00:19:31.459 11:26:58 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:31.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.459 --rc genhtml_branch_coverage=1 00:19:31.459 --rc genhtml_function_coverage=1 00:19:31.459 --rc genhtml_legend=1 00:19:31.459 --rc geninfo_all_blocks=1 00:19:31.459 --rc geninfo_unexecuted_blocks=1 00:19:31.459 00:19:31.459 ' 00:19:31.459 11:26:58 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:31.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.459 --rc genhtml_branch_coverage=1 00:19:31.459 --rc genhtml_function_coverage=1 00:19:31.459 --rc genhtml_legend=1 00:19:31.459 --rc geninfo_all_blocks=1 00:19:31.459 --rc geninfo_unexecuted_blocks=1 00:19:31.459 00:19:31.459 ' 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73641 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:31.459 11:26:58 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73641 00:19:31.459 11:26:58 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73641 ']' 00:19:31.459 11:26:58 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.459 11:26:58 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.459 11:26:58 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.459 11:26:58 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.459 11:26:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:31.459 [2024-12-10 11:26:58.519073] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
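The setup that follows enumerates the /dev/nvme*n* block nodes, skips zoned namespaces, and queues one bdev_xnvme_create per device. Condensed into a standalone sketch, assuming the io_uring mechanism and a per-command rpc.py call in place of the harness's persistent RPC pipe (zoned-namespace filtering omitted for brevity):

    io_mechanism=io_uring
    nvmes=()
    for nvme in /dev/nvme*n*; do
      [[ -b $nvme ]] || continue            # only real block devices
      nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
    done
    for cmd in "${nvmes[@]}"; do
      ./scripts/rpc.py $cmd                 # one create per namespace
    done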
00:19:31.459 [2024-12-10 11:26:58.519212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73641 ] 00:19:31.718 [2024-12-10 11:26:58.694969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.718 [2024-12-10 11:26:58.798641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.656 11:26:59 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.656 11:26:59 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:19:32.656 11:26:59 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:32.656 11:26:59 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:19:32.656 11:26:59 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:19:32.656 11:26:59 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:19:32.656 11:26:59 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:33.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:34.163 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:19:34.163 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:19:34.163 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:19:34.163 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:19:34.163 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:19:34.163 11:27:01 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:34.163 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:19:34.164 nvme0n1 00:19:34.164 nvme0n2 00:19:34.164 nvme0n3 00:19:34.164 nvme1n1 00:19:34.164 nvme2n1 00:19:34.164 nvme3n1 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:34.164 
11:27:01 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:19:34.164 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.164 11:27:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:34.424 11:27:01 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.424 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:34.424 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "31fd2e10-e7d9-4c57-8013-d5bdf3bf98a1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "31fd2e10-e7d9-4c57-8013-d5bdf3bf98a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "1f249c5d-1afe-4331-bcae-cba174200989"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1f249c5d-1afe-4331-bcae-cba174200989",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "c029285e-bb5c-4387-a1c3-508c5fa5c54e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c029285e-bb5c-4387-a1c3-508c5fa5c54e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' 
"b9721ce4-b1fb-4f78-97aa-441939babf26"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b9721ce4-b1fb-4f78-97aa-441939babf26",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "23b4e6e1-a8c6-414f-9a5e-6ad1e09c5837"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "23b4e6e1-a8c6-414f-9a5e-6ad1e09c5837",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "62258215-fadc-45eb-ab37-edc22ee2bfe1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "62258215-fadc-45eb-ab37-edc22ee2bfe1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:34.424 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:34.424 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:34.424 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:19:34.424 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:34.424 11:27:01 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73641 00:19:34.424 11:27:01 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73641 ']' 00:19:34.424 11:27:01 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73641 00:19:34.424 11:27:01 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:19:34.424 11:27:01 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.424 11:27:01 blockdev_xnvme -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 73641 00:19:34.424 11:27:01 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.424 11:27:01 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.424 killing process with pid 73641 00:19:34.424 11:27:01 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73641' 00:19:34.424 11:27:01 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73641 00:19:34.424 11:27:01 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73641 00:19:36.958 11:27:03 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:36.958 11:27:03 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:36.958 11:27:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:36.958 11:27:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:36.958 11:27:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:36.958 ************************************ 00:19:36.958 START TEST bdev_hello_world 00:19:36.958 ************************************ 00:19:36.958 11:27:03 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:36.958 [2024-12-10 11:27:03.819574] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:19:36.958 [2024-12-10 11:27:03.819684] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73937 ] 00:19:36.958 [2024-12-10 11:27:04.001167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.217 [2024-12-10 11:27:04.112343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.476 [2024-12-10 11:27:04.541760] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:37.476 [2024-12-10 11:27:04.542016] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:19:37.476 [2024-12-10 11:27:04.542045] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:37.476 [2024-12-10 11:27:04.544097] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:37.476 [2024-12-10 11:27:04.544441] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:37.476 [2024-12-10 11:27:04.544469] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:37.476 [2024-12-10 11:27:04.544645] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
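The hello-world pass above boils down to a single example binary run against the generated bdev config; the equivalent standalone invocation, using the same paths the harness uses, is:

    # write "Hello World!" to the first xnvme bdev and read it back
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1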
00:19:37.476 00:19:37.476 [2024-12-10 11:27:04.544669] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:38.853 00:19:38.853 real 0m1.877s 00:19:38.853 user 0m1.530s 00:19:38.853 sys 0m0.232s 00:19:38.853 11:27:05 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.853 11:27:05 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:38.853 ************************************ 00:19:38.853 END TEST bdev_hello_world 00:19:38.853 ************************************ 00:19:38.853 11:27:05 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:38.853 11:27:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:38.853 11:27:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.853 11:27:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:38.853 ************************************ 00:19:38.853 START TEST bdev_bounds 00:19:38.853 ************************************ 00:19:38.853 11:27:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:38.853 11:27:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73974 00:19:38.853 11:27:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:38.853 11:27:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:38.853 11:27:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73974' 00:19:38.853 Process bdevio pid: 73974 00:19:38.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.853 11:27:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73974 00:19:38.853 11:27:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73974 ']' 00:19:38.853 11:27:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.853 11:27:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.853 11:27:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.853 11:27:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.853 11:27:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:38.853 [2024-12-10 11:27:05.774092] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
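bdev_bounds pairs the bdevio app, started with -w so it waits for RPC before running, with a controller script that fires the CUnit suites seen below. A rough sketch of the same pair by hand, with a fixed sleep standing in for the harness's waitforlisten:

    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    sleep 2   # crude wait for /var/tmp/spdk.sock; the harness polls instead
    ./test/bdev/bdevio/tests.py perform_tests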
00:19:38.853 [2024-12-10 11:27:05.774216] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73974 ] 00:19:38.853 [2024-12-10 11:27:05.948290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:39.112 [2024-12-10 11:27:06.056343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.112 [2024-12-10 11:27:06.056478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.112 [2024-12-10 11:27:06.056508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.680 11:27:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.680 11:27:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:39.680 11:27:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:39.680 I/O targets: 00:19:39.680 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:39.680 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:39.680 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:39.680 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:19:39.680 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:19:39.680 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:19:39.680 00:19:39.680 00:19:39.680 CUnit - A unit testing framework for C - Version 2.1-3 00:19:39.680 http://cunit.sourceforge.net/ 00:19:39.680 00:19:39.680 00:19:39.680 Suite: bdevio tests on: nvme3n1 00:19:39.680 Test: blockdev write read block ...passed 00:19:39.680 Test: blockdev write zeroes read block ...passed 00:19:39.680 Test: blockdev write zeroes read no split ...passed 00:19:39.680 Test: blockdev write zeroes read split ...passed 00:19:39.680 Test: blockdev write zeroes read split partial ...passed 00:19:39.680 Test: blockdev reset ...passed 00:19:39.680 Test: blockdev write read 8 blocks ...passed 00:19:39.680 Test: blockdev write read size > 128k ...passed 00:19:39.680 Test: blockdev write read invalid size ...passed 00:19:39.680 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:39.680 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:39.680 Test: blockdev write read max offset ...passed 00:19:39.680 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:39.680 Test: blockdev writev readv 8 blocks ...passed 00:19:39.680 Test: blockdev writev readv 30 x 1block ...passed 00:19:39.680 Test: blockdev writev readv block ...passed 00:19:39.680 Test: blockdev writev readv size > 128k ...passed 00:19:39.680 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:39.680 Test: blockdev comparev and writev ...passed 00:19:39.680 Test: blockdev nvme passthru rw ...passed 00:19:39.680 Test: blockdev nvme passthru vendor specific ...passed 00:19:39.680 Test: blockdev nvme admin passthru ...passed 00:19:39.680 Test: blockdev copy ...passed 00:19:39.680 Suite: bdevio tests on: nvme2n1 00:19:39.680 Test: blockdev write read block ...passed 00:19:39.680 Test: blockdev write zeroes read block ...passed 00:19:39.680 Test: blockdev write zeroes read no split ...passed 00:19:39.939 Test: blockdev write zeroes read split ...passed 00:19:39.939 Test: blockdev write zeroes read split partial ...passed 00:19:39.939 Test: blockdev reset ...passed 
00:19:39.939 Test: blockdev write read 8 blocks ...passed 00:19:39.939 Test: blockdev write read size > 128k ...passed 00:19:39.939 Test: blockdev write read invalid size ...passed 00:19:39.939 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:39.939 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:39.939 Test: blockdev write read max offset ...passed 00:19:39.939 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:39.939 Test: blockdev writev readv 8 blocks ...passed 00:19:39.939 Test: blockdev writev readv 30 x 1block ...passed 00:19:39.939 Test: blockdev writev readv block ...passed 00:19:39.939 Test: blockdev writev readv size > 128k ...passed 00:19:39.939 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:39.939 Test: blockdev comparev and writev ...passed 00:19:39.939 Test: blockdev nvme passthru rw ...passed 00:19:39.939 Test: blockdev nvme passthru vendor specific ...passed 00:19:39.939 Test: blockdev nvme admin passthru ...passed 00:19:39.939 Test: blockdev copy ...passed 00:19:39.939 Suite: bdevio tests on: nvme1n1 00:19:39.939 Test: blockdev write read block ...passed 00:19:39.939 Test: blockdev write zeroes read block ...passed 00:19:39.939 Test: blockdev write zeroes read no split ...passed 00:19:39.939 Test: blockdev write zeroes read split ...passed 00:19:39.939 Test: blockdev write zeroes read split partial ...passed 00:19:39.939 Test: blockdev reset ...passed 00:19:39.939 Test: blockdev write read 8 blocks ...passed 00:19:39.939 Test: blockdev write read size > 128k ...passed 00:19:39.939 Test: blockdev write read invalid size ...passed 00:19:39.939 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:39.939 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:39.939 Test: blockdev write read max offset ...passed 00:19:39.939 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:39.939 Test: blockdev writev readv 8 blocks ...passed 00:19:39.939 Test: blockdev writev readv 30 x 1block ...passed 00:19:39.939 Test: blockdev writev readv block ...passed 00:19:39.939 Test: blockdev writev readv size > 128k ...passed 00:19:39.939 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:39.939 Test: blockdev comparev and writev ...passed 00:19:39.939 Test: blockdev nvme passthru rw ...passed 00:19:39.939 Test: blockdev nvme passthru vendor specific ...passed 00:19:39.939 Test: blockdev nvme admin passthru ...passed 00:19:39.939 Test: blockdev copy ...passed 00:19:39.939 Suite: bdevio tests on: nvme0n3 00:19:39.939 Test: blockdev write read block ...passed 00:19:39.939 Test: blockdev write zeroes read block ...passed 00:19:39.939 Test: blockdev write zeroes read no split ...passed 00:19:39.939 Test: blockdev write zeroes read split ...passed 00:19:39.939 Test: blockdev write zeroes read split partial ...passed 00:19:39.939 Test: blockdev reset ...passed 00:19:39.939 Test: blockdev write read 8 blocks ...passed 00:19:39.939 Test: blockdev write read size > 128k ...passed 00:19:39.939 Test: blockdev write read invalid size ...passed 00:19:39.939 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:39.939 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:39.939 Test: blockdev write read max offset ...passed 00:19:39.939 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:39.939 Test: blockdev writev readv 8 blocks 
...passed 00:19:39.939 Test: blockdev writev readv 30 x 1block ...passed 00:19:39.939 Test: blockdev writev readv block ...passed 00:19:39.939 Test: blockdev writev readv size > 128k ...passed 00:19:39.939 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:39.939 Test: blockdev comparev and writev ...passed 00:19:39.939 Test: blockdev nvme passthru rw ...passed 00:19:39.939 Test: blockdev nvme passthru vendor specific ...passed 00:19:39.939 Test: blockdev nvme admin passthru ...passed 00:19:39.939 Test: blockdev copy ...passed 00:19:39.939 Suite: bdevio tests on: nvme0n2 00:19:39.939 Test: blockdev write read block ...passed 00:19:39.939 Test: blockdev write zeroes read block ...passed 00:19:39.939 Test: blockdev write zeroes read no split ...passed 00:19:39.939 Test: blockdev write zeroes read split ...passed 00:19:40.198 Test: blockdev write zeroes read split partial ...passed 00:19:40.198 Test: blockdev reset ...passed 00:19:40.198 Test: blockdev write read 8 blocks ...passed 00:19:40.198 Test: blockdev write read size > 128k ...passed 00:19:40.198 Test: blockdev write read invalid size ...passed 00:19:40.198 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:40.198 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:40.198 Test: blockdev write read max offset ...passed 00:19:40.198 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:40.198 Test: blockdev writev readv 8 blocks ...passed 00:19:40.199 Test: blockdev writev readv 30 x 1block ...passed 00:19:40.199 Test: blockdev writev readv block ...passed 00:19:40.199 Test: blockdev writev readv size > 128k ...passed 00:19:40.199 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:40.199 Test: blockdev comparev and writev ...passed 00:19:40.199 Test: blockdev nvme passthru rw ...passed 00:19:40.199 Test: blockdev nvme passthru vendor specific ...passed 00:19:40.199 Test: blockdev nvme admin passthru ...passed 00:19:40.199 Test: blockdev copy ...passed 00:19:40.199 Suite: bdevio tests on: nvme0n1 00:19:40.199 Test: blockdev write read block ...passed 00:19:40.199 Test: blockdev write zeroes read block ...passed 00:19:40.199 Test: blockdev write zeroes read no split ...passed 00:19:40.199 Test: blockdev write zeroes read split ...passed 00:19:40.199 Test: blockdev write zeroes read split partial ...passed 00:19:40.199 Test: blockdev reset ...passed 00:19:40.199 Test: blockdev write read 8 blocks ...passed 00:19:40.199 Test: blockdev write read size > 128k ...passed 00:19:40.199 Test: blockdev write read invalid size ...passed 00:19:40.199 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:40.199 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:40.199 Test: blockdev write read max offset ...passed 00:19:40.199 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:40.199 Test: blockdev writev readv 8 blocks ...passed 00:19:40.199 Test: blockdev writev readv 30 x 1block ...passed 00:19:40.199 Test: blockdev writev readv block ...passed 00:19:40.199 Test: blockdev writev readv size > 128k ...passed 00:19:40.199 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:40.199 Test: blockdev comparev and writev ...passed 00:19:40.199 Test: blockdev nvme passthru rw ...passed 00:19:40.199 Test: blockdev nvme passthru vendor specific ...passed 00:19:40.199 Test: blockdev nvme admin passthru ...passed 00:19:40.199 Test: blockdev copy ...passed 
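Each of the six suites above runs the same 23 block-I/O tests, which accounts for the 6 × 23 = 138 total in the run summary that follows. The bounds test drives them with a two-step pattern: bdevio starts in wait mode (-w) and only fires the CUnit suites once tests.py issues perform_tests over the app's RPC socket. A sketch of that pattern, assuming the traced paths and the default socket /var/tmp/spdk.sock:

#!/usr/bin/env bash
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk

# -w makes bdevio register its suites and then block until told to run;
# the remaining flags mirror the traced invocation.
"$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 0 \
    --json "$SPDK_DIR/test/bdev/bdev.json" &
pid=$!

# Crude stand-in for the harness's waitforlisten(): poll for the socket.
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done

# Trigger every registered CUnit test over RPC, then tear the app down,
# as killprocess() does above.
"$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests
kill "$pid"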
00:19:40.199 00:19:40.199 Run Summary: Type Total Ran Passed Failed Inactive 00:19:40.199 suites 6 6 n/a 0 0 00:19:40.199 tests 138 138 138 0 0 00:19:40.199 asserts 780 780 780 0 n/a 00:19:40.199 00:19:40.199 Elapsed time = 1.295 seconds 00:19:40.199 0 00:19:40.199 11:27:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73974 00:19:40.199 11:27:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73974 ']' 00:19:40.199 11:27:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73974 00:19:40.199 11:27:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:40.199 11:27:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.199 11:27:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73974 00:19:40.199 11:27:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.199 11:27:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.199 11:27:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73974' 00:19:40.199 killing process with pid 73974 00:19:40.199 11:27:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73974 00:19:40.199 11:27:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73974 00:19:41.578 11:27:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:41.578 ************************************ 00:19:41.578 END TEST bdev_bounds 00:19:41.578 ************************************ 00:19:41.578 00:19:41.578 real 0m2.666s 00:19:41.578 user 0m6.659s 00:19:41.578 sys 0m0.372s 00:19:41.578 11:27:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.578 11:27:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:41.578 11:27:08 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:19:41.578 11:27:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:41.578 11:27:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:41.578 11:27:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:41.578 ************************************ 00:19:41.578 START TEST bdev_nbd 00:19:41.578 ************************************ 00:19:41.578 11:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:19:41.578 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:41.578 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:41.578 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:41.578 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:41.578 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:41.578 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:41.578 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
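The nbd_function_test whose setup begins here maps each of the six bdevs onto a kernel /dev/nbdX node over the dedicated socket /var/tmp/spdk-nbd.sock, probes every node with a one-block O_DIRECT read, lists the mappings with nbd_get_disks, tears them down, and (further below) rewrites them to push 1 MiB of urandom data through each device with dd oflag=direct, verified by cmp -b -n 1M. A sketch of the core map/probe/unmap loop, assuming the traced rpc.py helpers, the nbd kernel module loaded, and a bdev_svc app already serving /var/tmp/spdk-nbd.sock as started above:

#!/usr/bin/env bash
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }

bdevs=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

for i in "${!bdevs[@]}"; do
  rpc nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"   # map bdev -> NBD node
  # Same idea as waitfornbd(): wait until the kernel publishes the device,
  # then prove it is readable with a single 4 KiB O_DIRECT block.
  until grep -q -w "$(basename "${nbds[$i]}")" /proc/partitions; do
    sleep 0.1
  done
  dd if="${nbds[$i]}" of=/dev/null bs=4096 count=1 iflag=direct
done

rpc nbd_get_disks | jq -r '.[] | .nbd_device'       # list active mappings

for nbd in "${nbds[@]}"; do
  rpc nbd_stop_disk "$nbd"
done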
00:19:41.578 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:41.578 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:41.578 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:41.579 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:19:41.579 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:41.579 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:41.579 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:41.579 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:41.579 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74033 00:19:41.579 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:41.579 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:41.579 11:27:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74033 /var/tmp/spdk-nbd.sock 00:19:41.579 11:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74033 ']' 00:19:41.579 11:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:41.579 11:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:41.579 11:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:41.579 11:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.579 11:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:41.579 [2024-12-10 11:27:08.529375] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:19:41.579 [2024-12-10 11:27:08.529681] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.838 [2024-12-10 11:27:08.710995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.838 [2024-12-10 11:27:08.821734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.406 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.406 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:42.406 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:19:42.406 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:42.406 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:42.406 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:42.406 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:19:42.406 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:42.406 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:42.406 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:42.406 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:42.406 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:42.406 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:42.406 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:42.406 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:42.666 
1+0 records in 00:19:42.666 1+0 records out 00:19:42.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005647 s, 7.3 MB/s 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:42.666 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:42.925 1+0 records in 00:19:42.925 1+0 records out 00:19:42.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528977 s, 7.7 MB/s 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:42.925 11:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:19:43.184 11:27:10 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:43.184 1+0 records in 00:19:43.184 1+0 records out 00:19:43.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580907 s, 7.1 MB/s 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:43.184 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:43.444 1+0 records in 00:19:43.444 1+0 records out 00:19:43.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664002 s, 6.2 MB/s 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:43.444 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:43.705 1+0 records in 00:19:43.705 1+0 records out 00:19:43.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106663 s, 3.8 MB/s 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:43.705 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:19:43.964 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:19:43.964 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:19:43.964 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:19:43.964 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:19:43.964 11:27:10 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:43.964 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:43.964 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:43.964 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:19:43.964 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:43.964 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:43.964 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:43.965 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:43.965 1+0 records in 00:19:43.965 1+0 records out 00:19:43.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00085501 s, 4.8 MB/s 00:19:43.965 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:43.965 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:43.965 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:43.965 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:43.965 11:27:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:43.965 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:43.965 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:43.965 11:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:44.224 { 00:19:44.224 "nbd_device": "/dev/nbd0", 00:19:44.224 "bdev_name": "nvme0n1" 00:19:44.224 }, 00:19:44.224 { 00:19:44.224 "nbd_device": "/dev/nbd1", 00:19:44.224 "bdev_name": "nvme0n2" 00:19:44.224 }, 00:19:44.224 { 00:19:44.224 "nbd_device": "/dev/nbd2", 00:19:44.224 "bdev_name": "nvme0n3" 00:19:44.224 }, 00:19:44.224 { 00:19:44.224 "nbd_device": "/dev/nbd3", 00:19:44.224 "bdev_name": "nvme1n1" 00:19:44.224 }, 00:19:44.224 { 00:19:44.224 "nbd_device": "/dev/nbd4", 00:19:44.224 "bdev_name": "nvme2n1" 00:19:44.224 }, 00:19:44.224 { 00:19:44.224 "nbd_device": "/dev/nbd5", 00:19:44.224 "bdev_name": "nvme3n1" 00:19:44.224 } 00:19:44.224 ]' 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:44.224 { 00:19:44.224 "nbd_device": "/dev/nbd0", 00:19:44.224 "bdev_name": "nvme0n1" 00:19:44.224 }, 00:19:44.224 { 00:19:44.224 "nbd_device": "/dev/nbd1", 00:19:44.224 "bdev_name": "nvme0n2" 00:19:44.224 }, 00:19:44.224 { 00:19:44.224 "nbd_device": "/dev/nbd2", 00:19:44.224 "bdev_name": "nvme0n3" 00:19:44.224 }, 00:19:44.224 { 00:19:44.224 "nbd_device": "/dev/nbd3", 00:19:44.224 "bdev_name": "nvme1n1" 00:19:44.224 }, 00:19:44.224 { 00:19:44.224 "nbd_device": "/dev/nbd4", 00:19:44.224 "bdev_name": "nvme2n1" 00:19:44.224 }, 00:19:44.224 { 00:19:44.224 "nbd_device": "/dev/nbd5", 00:19:44.224 "bdev_name": "nvme3n1" 00:19:44.224 } 00:19:44.224 ]' 00:19:44.224 11:27:11 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:44.224 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:44.484 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:44.484 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:44.484 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:44.484 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:44.484 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:44.484 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:44.484 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:44.484 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:44.484 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:44.484 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:19:44.743 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:19:44.743 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:19:44.743 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:19:44.743 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:44.743 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:44.743 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:19:44.743 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:44.743 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:44.743 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:44.743 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:19:45.001 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:19:45.001 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:19:45.001 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:19:45.001 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:45.001 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:45.001 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:19:45.001 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:45.001 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:45.001 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:45.001 11:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:45.261 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:45.520 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:19:45.779 /dev/nbd0 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:45.779 1+0 records in 00:19:45.779 1+0 records out 00:19:45.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575041 s, 7.1 MB/s 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:45.779 11:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:19:46.038 /dev/nbd1 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:46.038 1+0 records in 00:19:46.038 1+0 records out 00:19:46.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691236 s, 5.9 MB/s 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:46.038 11:27:13 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:46.038 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:19:46.298 /dev/nbd10 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:46.298 1+0 records in 00:19:46.298 1+0 records out 00:19:46.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060604 s, 6.8 MB/s 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:46.298 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:19:46.557 /dev/nbd11 00:19:46.557 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:19:46.557 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:19:46.557 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:19:46.558 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:46.558 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:46.558 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:46.558 11:27:13 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:19:46.558 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:46.558 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:46.558 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:46.558 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:46.558 1+0 records in 00:19:46.558 1+0 records out 00:19:46.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000711406 s, 5.8 MB/s 00:19:46.558 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.558 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:46.558 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.558 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:46.558 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:46.558 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:46.558 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:46.558 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:19:46.817 /dev/nbd12 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:46.817 1+0 records in 00:19:46.817 1+0 records out 00:19:46.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00299899 s, 1.4 MB/s 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:46.817 11:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:19:47.076 /dev/nbd13 00:19:47.076 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:19:47.076 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:47.077 1+0 records in 00:19:47.077 1+0 records out 00:19:47.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000934322 s, 4.4 MB/s 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:47.077 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:47.336 { 00:19:47.336 "nbd_device": "/dev/nbd0", 00:19:47.336 "bdev_name": "nvme0n1" 00:19:47.336 }, 00:19:47.336 { 00:19:47.336 "nbd_device": "/dev/nbd1", 00:19:47.336 "bdev_name": "nvme0n2" 00:19:47.336 }, 00:19:47.336 { 00:19:47.336 "nbd_device": "/dev/nbd10", 00:19:47.336 "bdev_name": "nvme0n3" 00:19:47.336 }, 00:19:47.336 { 00:19:47.336 "nbd_device": "/dev/nbd11", 00:19:47.336 "bdev_name": "nvme1n1" 00:19:47.336 }, 00:19:47.336 { 00:19:47.336 "nbd_device": "/dev/nbd12", 00:19:47.336 "bdev_name": "nvme2n1" 00:19:47.336 }, 00:19:47.336 { 00:19:47.336 "nbd_device": "/dev/nbd13", 00:19:47.336 "bdev_name": "nvme3n1" 00:19:47.336 } 00:19:47.336 ]' 00:19:47.336 11:27:14 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:47.336 { 00:19:47.336 "nbd_device": "/dev/nbd0", 00:19:47.336 "bdev_name": "nvme0n1" 00:19:47.336 }, 00:19:47.336 { 00:19:47.336 "nbd_device": "/dev/nbd1", 00:19:47.336 "bdev_name": "nvme0n2" 00:19:47.336 }, 00:19:47.336 { 00:19:47.336 "nbd_device": "/dev/nbd10", 00:19:47.336 "bdev_name": "nvme0n3" 00:19:47.336 }, 00:19:47.336 { 00:19:47.336 "nbd_device": "/dev/nbd11", 00:19:47.336 "bdev_name": "nvme1n1" 00:19:47.336 }, 00:19:47.336 { 00:19:47.336 "nbd_device": "/dev/nbd12", 00:19:47.336 "bdev_name": "nvme2n1" 00:19:47.336 }, 00:19:47.336 { 00:19:47.336 "nbd_device": "/dev/nbd13", 00:19:47.336 "bdev_name": "nvme3n1" 00:19:47.336 } 00:19:47.336 ]' 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:47.336 /dev/nbd1 00:19:47.336 /dev/nbd10 00:19:47.336 /dev/nbd11 00:19:47.336 /dev/nbd12 00:19:47.336 /dev/nbd13' 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:47.336 /dev/nbd1 00:19:47.336 /dev/nbd10 00:19:47.336 /dev/nbd11 00:19:47.336 /dev/nbd12 00:19:47.336 /dev/nbd13' 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:47.336 256+0 records in 00:19:47.336 256+0 records out 00:19:47.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117622 s, 89.1 MB/s 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:47.336 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:47.595 256+0 records in 00:19:47.595 256+0 records out 00:19:47.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124594 s, 8.4 MB/s 00:19:47.596 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:47.596 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:47.596 256+0 records in 00:19:47.596 256+0 records out 00:19:47.596 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.127544 s, 8.2 MB/s 00:19:47.596 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:47.596 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:19:47.855 256+0 records in 00:19:47.855 256+0 records out 00:19:47.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129531 s, 8.1 MB/s 00:19:47.855 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:47.855 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:19:47.855 256+0 records in 00:19:47.855 256+0 records out 00:19:47.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128675 s, 8.1 MB/s 00:19:47.855 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:47.855 11:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:19:48.113 256+0 records in 00:19:48.113 256+0 records out 00:19:48.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161489 s, 6.5 MB/s 00:19:48.113 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:48.113 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:19:48.113 256+0 records in 00:19:48.113 256+0 records out 00:19:48.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126656 s, 8.3 MB/s 00:19:48.113 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:19:48.113 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:48.113 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:48.113 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:48.113 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:48.113 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:48.113 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:48.113 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:48.113 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:48.113 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:48.113 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:19:48.113 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:48.113 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:19:48.113 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:48.113 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:19:48.370 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:48.370 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:19:48.370 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:48.370 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:19:48.370 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:48.370 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:48.370 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:48.371 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:48.371 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:48.371 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:48.371 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:48.371 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:48.371 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:48.371 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:48.371 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:48.371 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:48.371 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:48.371 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:48.629 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:48.629 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:48.629 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:48.629 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:48.629 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:48.629 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:48.629 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:48.629 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:48.629 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:48.629 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:48.629 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:48.629 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:48.629 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:48.629 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:19:48.888 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:19:48.888 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:19:48.888 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:19:48.888 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:48.888 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:48.888 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:19:48.888 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:48.888 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:48.888 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:48.888 11:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:19:49.147 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:19:49.147 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:19:49.147 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:19:49.147 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:49.147 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.147 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:19:49.147 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:49.147 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.147 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.147 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:19:49.407 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:19:49.407 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:19:49.407 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:19:49.407 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:49.407 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.407 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:19:49.407 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:49.407 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.407 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.407 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:19:49.666 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:19:49.666 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:19:49.666 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:19:49.666 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:49.666 11:27:16 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.666 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:19:49.666 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:49.666 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.666 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:49.666 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:49.666 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:49.666 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:49.666 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:49.666 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:49.666 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:49.666 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:49.666 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:49.925 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:49.925 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:49.925 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:49.925 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:49.925 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:49.925 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:49.925 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:49.925 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:49.925 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:49.925 11:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:49.925 malloc_lvol_verify 00:19:49.925 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:50.184 6b6dc226-5b8c-4fd0-8b94-a968bccbb9b5 00:19:50.184 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:50.444 d66abdcd-b304-48b6-bd70-38ce19a91944 00:19:50.444 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:50.704 /dev/nbd0 00:19:50.704 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:50.704 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:50.704 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:50.704 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:50.704 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:19:50.704 mke2fs 1.47.0 (5-Feb-2023) 00:19:50.704 Discarding device blocks: 0/4096 done 00:19:50.704 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:50.704 00:19:50.704 Allocating group tables: 0/1 done 00:19:50.704 Writing inode tables: 0/1 done 00:19:50.704 Creating journal (1024 blocks): done 00:19:50.704 Writing superblocks and filesystem accounting information: 0/1 done 00:19:50.704 00:19:50.704 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:50.704 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.704 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:50.704 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:50.704 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:50.704 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:50.704 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74033 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74033 ']' 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74033 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74033 00:19:50.964 killing process with pid 74033 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74033' 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74033 00:19:50.964 11:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74033 00:19:52.345 ************************************ 00:19:52.345 END TEST bdev_nbd 00:19:52.345 ************************************ 00:19:52.345 11:27:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:52.345 00:19:52.345 real 0m10.648s 00:19:52.345 user 0m13.614s 00:19:52.345 sys 0m4.594s 00:19:52.345 11:27:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:52.345 
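The bdev_nbd stage above repeats one pattern per device: attach a bdev to /dev/nbdN over the RPC socket, poll /proc/partitions until the kernel publishes the node, round-trip data through it with dd and cmp, then detach and poll for the node to disappear. A minimal standalone sketch of that loop, assuming an SPDK target is already listening on /var/tmp/spdk-nbd.sock and exposes a bdev named nvme0n1 (paths and RPC names taken from this log; the script itself is illustrative, not the autotest helpers):

    #!/usr/bin/env bash
    set -euo pipefail
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    DEV=/dev/nbd0

    # Attach the bdev to the NBD device node.
    "$RPC" -s "$SOCK" nbd_start_disk nvme0n1 "$DEV"

    # Same idea as waitfornbd: poll until the kernel lists the device.
    for i in $(seq 1 20); do
        if grep -q -w "$(basename "$DEV")" /proc/partitions; then
            break
        fi
        sleep 0.1
    done

    # Write 1 MiB of random data through NBD and compare it back,
    # mirroring the nbd_dd_data_verify write and verify passes above.
    dd if=/dev/urandom of=/tmp/nbdrand bs=4096 count=256
    dd if=/tmp/nbdrand of="$DEV" bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrand "$DEV"

    # Detach; waitfornbd_exit then polls /proc/partitions again.
    "$RPC" -s "$SOCK" nbd_stop_disk "$DEV"
    rm -f /tmp/nbdrand

The lvol tail of the stage (bdev_malloc_create, bdev_lvol_create_lvstore, bdev_lvol_create, nbd_start_disk lvs/lvol, then mkfs.ext4) checks the same export path end to end by building a real ext4 filesystem on an exported logical volume, as the mke2fs output below shows.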
11:27:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:52.345 11:27:19 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:52.345 11:27:19 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:19:52.345 11:27:19 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:19:52.345 11:27:19 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:52.345 11:27:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:52.345 11:27:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.345 11:27:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:52.345 ************************************ 00:19:52.345 START TEST bdev_fio 00:19:52.345 ************************************ 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:52.345 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:19:52.345 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:52.346 ************************************ 00:19:52.346 START TEST bdev_fio_rw_verify 00:19:52.346 ************************************ 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:52.346 11:27:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:52.606 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:52.606 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:52.606 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:52.606 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:52.606 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:52.606 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:52.606 fio-3.35 00:19:52.606 Starting 6 threads 00:20:04.826 00:20:04.826 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74444: Tue Dec 10 11:27:30 2024 00:20:04.826 read: IOPS=32.5k, BW=127MiB/s (133MB/s)(1271MiB/10001msec) 00:20:04.826 slat (usec): min=2, max=734, avg= 7.23, stdev= 5.89 00:20:04.826 clat (usec): min=79, max=59087, avg=569.27, 
stdev=264.89 00:20:04.826 lat (usec): min=84, max=59122, avg=576.50, stdev=265.88 00:20:04.826 clat percentiles (usec): 00:20:04.826 | 50.000th=[ 578], 99.000th=[ 1156], 99.900th=[ 1647], 99.990th=[ 3523], 00:20:04.826 | 99.999th=[19530] 00:20:04.826 write: IOPS=32.9k, BW=129MiB/s (135MB/s)(1286MiB/10001msec); 0 zone resets 00:20:04.826 slat (usec): min=10, max=2299, avg=23.77, stdev=31.69 00:20:04.826 clat (usec): min=81, max=3490, avg=663.18, stdev=249.97 00:20:04.826 lat (usec): min=95, max=3542, avg=686.95, stdev=254.21 00:20:04.826 clat percentiles (usec): 00:20:04.826 | 50.000th=[ 660], 99.000th=[ 1434], 99.900th=[ 2089], 99.990th=[ 2769], 00:20:04.826 | 99.999th=[ 3359] 00:20:04.826 bw ( KiB/s): min=103405, max=151442, per=99.83%, avg=131450.21, stdev=2229.01, samples=114 00:20:04.826 iops : min=25849, max=37860, avg=32861.84, stdev=557.28, samples=114 00:20:04.826 lat (usec) : 100=0.01%, 250=6.01%, 500=25.19%, 750=42.72%, 1000=21.16% 00:20:04.826 lat (msec) : 2=4.83%, 4=0.09%, 20=0.01%, 50=0.01%, 100=0.01% 00:20:04.826 cpu : usr=57.72%, sys=28.45%, ctx=7809, majf=0, minf=27143 00:20:04.826 IO depths : 1=11.9%, 2=24.3%, 4=50.7%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:04.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.826 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.826 issued rwts: total=325437,329234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:04.826 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:04.826 00:20:04.826 Run status group 0 (all jobs): 00:20:04.826 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=1271MiB (1333MB), run=10001-10001msec 00:20:04.826 WRITE: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=1286MiB (1349MB), run=10001-10001msec 00:20:04.826 ----------------------------------------------------- 00:20:04.826 Suppressions used: 00:20:04.826 count bytes template 00:20:04.826 6 48 /usr/src/fio/parse.c 00:20:04.826 3572 342912 /usr/src/fio/iolog.c 00:20:04.826 1 8 libtcmalloc_minimal.so 00:20:04.826 1 904 libcrypto.so 00:20:04.826 ----------------------------------------------------- 00:20:04.826 00:20:04.826 00:20:04.826 real 0m12.591s 00:20:04.826 user 0m36.715s 00:20:04.826 sys 0m17.509s 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:04.826 ************************************ 00:20:04.826 END TEST bdev_fio_rw_verify 00:20:04.826 ************************************ 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # 
local fio_dir=/usr/src/fio 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:04.826 11:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:05.086 11:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "31fd2e10-e7d9-4c57-8013-d5bdf3bf98a1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "31fd2e10-e7d9-4c57-8013-d5bdf3bf98a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "1f249c5d-1afe-4331-bcae-cba174200989"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1f249c5d-1afe-4331-bcae-cba174200989",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "c029285e-bb5c-4387-a1c3-508c5fa5c54e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c029285e-bb5c-4387-a1c3-508c5fa5c54e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b9721ce4-b1fb-4f78-97aa-441939babf26"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b9721ce4-b1fb-4f78-97aa-441939babf26",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "23b4e6e1-a8c6-414f-9a5e-6ad1e09c5837"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "23b4e6e1-a8c6-414f-9a5e-6ad1e09c5837",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "62258215-fadc-45eb-ab37-edc22ee2bfe1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "62258215-fadc-45eb-ab37-edc22ee2bfe1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:20:05.086 11:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:05.086 11:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:05.086 /home/vagrant/spdk_repo/spdk 00:20:05.086 11:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:05.086 11:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:05.087 11:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:20:05.087 00:20:05.087 real 0m12.831s 00:20:05.087 user 0m36.829s 00:20:05.087 sys 0m17.640s 00:20:05.087 11:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:05.087 11:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:05.087 ************************************ 00:20:05.087 END TEST bdev_fio 00:20:05.087 ************************************ 00:20:05.087 11:27:32 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:05.087 11:27:32 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:05.087 11:27:32 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:05.087 11:27:32 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.087 11:27:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:05.087 ************************************ 00:20:05.087 START TEST bdev_verify 00:20:05.087 ************************************ 00:20:05.087 11:27:32 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:05.087 [2024-12-10 11:27:32.184876] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:20:05.087 [2024-12-10 11:27:32.185062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74613 ] 00:20:05.346 [2024-12-10 11:27:32.376113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:05.605 [2024-12-10 11:27:32.522733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.605 [2024-12-10 11:27:32.522767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.174 Running I/O for 5 seconds... 
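The fio stage above drives all six xNVMe bdevs through fio's external spdk_bdev ioengine (note the LD_PRELOAD of libasan.so.8 ahead of the plugin so the sanitizer interposes first); bdev_verify, whose results follow, repeats the data verification with SPDK's own bdevperf example app against the same bdev.json. A hedged by-hand re-run of that invocation, with the flags copied verbatim from this log:

    #   -q 128     128 outstanding I/Os per job
    #   -o 4096    4 KiB I/O size
    #   -w verify  write a pattern, read it back, compare
    #   -t 5       five-second run
    #   -m 0x3     two reactor cores (matches "Total cores available: 2");
    #              -C is passed through exactly as in the log
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3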
00:20:08.122 20704.00 IOPS, 80.88 MiB/s [2024-12-10T11:27:36.616Z] 22608.00 IOPS, 88.31 MiB/s [2024-12-10T11:27:37.552Z] 22986.33 IOPS, 89.79 MiB/s [2024-12-10T11:27:38.543Z] 23471.75 IOPS, 91.69 MiB/s [2024-12-10T11:27:38.543Z] 23417.40 IOPS, 91.47 MiB/s 00:20:11.429 Latency(us) 00:20:11.429 [2024-12-10T11:27:38.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.429 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:11.429 Verification LBA range: start 0x0 length 0x80000 00:20:11.429 nvme0n1 : 5.06 1819.83 7.11 0.00 0.00 70231.33 15265.41 77485.13 00:20:11.429 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:11.429 Verification LBA range: start 0x80000 length 0x80000 00:20:11.429 nvme0n1 : 5.05 1724.00 6.73 0.00 0.00 74122.18 10422.59 77906.25 00:20:11.429 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:11.429 Verification LBA range: start 0x0 length 0x80000 00:20:11.429 nvme0n2 : 5.04 1827.43 7.14 0.00 0.00 69843.19 11843.86 73695.10 00:20:11.429 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:11.429 Verification LBA range: start 0x80000 length 0x80000 00:20:11.429 nvme0n2 : 5.06 1721.82 6.73 0.00 0.00 74093.03 13686.23 88013.01 00:20:11.429 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:11.429 Verification LBA range: start 0x0 length 0x80000 00:20:11.429 nvme0n3 : 5.09 1835.01 7.17 0.00 0.00 69471.67 13370.40 70747.30 00:20:11.429 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:11.429 Verification LBA range: start 0x80000 length 0x80000 00:20:11.429 nvme0n3 : 5.06 1718.53 6.71 0.00 0.00 74121.34 14528.46 85065.20 00:20:11.429 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:11.429 Verification LBA range: start 0x0 length 0x20000 00:20:11.429 nvme1n1 : 5.05 1825.56 7.13 0.00 0.00 69718.11 10527.87 70326.18 00:20:11.429 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:11.429 Verification LBA range: start 0x20000 length 0x20000 00:20:11.429 nvme1n1 : 5.07 1717.42 6.71 0.00 0.00 74076.31 10685.79 80854.05 00:20:11.429 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:11.429 Verification LBA range: start 0x0 length 0xbd0bd 00:20:11.429 nvme2n1 : 5.10 2681.57 10.47 0.00 0.00 47300.42 6316.72 59377.20 00:20:11.429 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:11.429 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:20:11.429 nvme2n1 : 5.08 2746.79 10.73 0.00 0.00 46148.91 5606.09 56008.28 00:20:11.429 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:11.429 Verification LBA range: start 0x0 length 0xa0000 00:20:11.429 nvme3n1 : 5.08 1839.68 7.19 0.00 0.00 68855.99 7001.03 74116.22 00:20:11.429 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:11.429 Verification LBA range: start 0xa0000 length 0xa0000 00:20:11.429 nvme3n1 : 5.07 1716.78 6.71 0.00 0.00 73726.91 8264.38 77064.02 00:20:11.429 [2024-12-10T11:27:38.543Z] =================================================================================================================== 00:20:11.429 [2024-12-10T11:27:38.543Z] Total : 23174.42 90.53 0.00 0.00 65870.00 5606.09 88013.01 00:20:12.367 00:20:12.368 real 0m7.405s 00:20:12.368 user 0m11.172s 00:20:12.368 sys 0m2.218s 00:20:12.368 11:27:39 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.368 11:27:39 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:12.368 ************************************ 00:20:12.368 END TEST bdev_verify 00:20:12.368 ************************************ 00:20:12.627 11:27:39 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:12.627 11:27:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:12.627 11:27:39 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:12.627 11:27:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:12.627 ************************************ 00:20:12.627 START TEST bdev_verify_big_io 00:20:12.627 ************************************ 00:20:12.627 11:27:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:12.627 [2024-12-10 11:27:39.637887] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:20:12.627 [2024-12-10 11:27:39.638033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74717 ] 00:20:12.886 [2024-12-10 11:27:39.820245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:12.886 [2024-12-10 11:27:39.962385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.886 [2024-12-10 11:27:39.962409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.824 Running I/O for 5 seconds... 
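bdev_verify_big_io, whose run follows, is the same bdevperf verify workload with the I/O size raised from 4 KiB to 64 KiB (-o 65536); everything else on the command line is unchanged. The MiB/s column in its table is simply IOPS times I/O size, e.g. the totals line below: 2364.80 IOPS * 64 KiB = 147.80 MiB/s.

    # Only the I/O size differs from the 4 KiB verify stage above:
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3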
00:20:18.898 1672.00 IOPS, 104.50 MiB/s [2024-12-10T11:27:46.581Z] 3368.00 IOPS, 210.50 MiB/s [2024-12-10T11:27:46.840Z] 3877.00 IOPS, 242.31 MiB/s 00:20:19.726 Latency(us) 00:20:19.726 [2024-12-10T11:27:46.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.726 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:19.726 Verification LBA range: start 0x0 length 0x8000 00:20:19.726 nvme0n1 : 5.60 189.96 11.87 0.00 0.00 666948.38 16423.48 781589.18 00:20:19.726 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:19.726 Verification LBA range: start 0x8000 length 0x8000 00:20:19.726 nvme0n1 : 5.48 151.89 9.49 0.00 0.00 802438.97 5184.98 1185859.44 00:20:19.726 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:19.726 Verification LBA range: start 0x0 length 0x8000 00:20:19.726 nvme0n2 : 5.59 206.07 12.88 0.00 0.00 588355.33 10317.31 640094.59 00:20:19.726 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:19.726 Verification LBA range: start 0x8000 length 0x8000 00:20:19.726 nvme0n2 : 5.62 133.72 8.36 0.00 0.00 874775.68 90960.81 1185859.44 00:20:19.726 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:19.726 Verification LBA range: start 0x0 length 0x8000 00:20:19.726 nvme0n3 : 5.61 194.02 12.13 0.00 0.00 633534.07 13423.04 710841.88 00:20:19.726 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:19.726 Verification LBA range: start 0x8000 length 0x8000 00:20:19.726 nvme0n3 : 5.76 163.90 10.24 0.00 0.00 697558.70 60640.54 1286927.01 00:20:19.726 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:19.726 Verification LBA range: start 0x0 length 0x2000 00:20:19.726 nvme1n1 : 5.61 189.15 11.82 0.00 0.00 639748.14 13686.23 1266713.50 00:20:19.726 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:19.727 Verification LBA range: start 0x2000 length 0x2000 00:20:19.727 nvme1n1 : 5.79 152.03 9.50 0.00 0.00 725363.89 70326.18 1401470.25 00:20:19.727 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:19.727 Verification LBA range: start 0x0 length 0xbd0b 00:20:19.727 nvme2n1 : 5.61 285.46 17.84 0.00 0.00 417109.27 15370.69 596298.64 00:20:19.727 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:19.727 Verification LBA range: start 0xbd0b length 0xbd0b 00:20:19.727 nvme2n1 : 5.91 238.22 14.89 0.00 0.00 448879.58 20108.23 1509275.66 00:20:19.727 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:19.727 Verification LBA range: start 0x0 length 0xa000 00:20:19.727 nvme3n1 : 5.61 160.05 10.00 0.00 0.00 731332.54 9738.28 1751837.82 00:20:19.727 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:19.727 Verification LBA range: start 0xa000 length 0xa000 00:20:19.727 nvme3n1 : 6.07 300.32 18.77 0.00 0.00 346612.86 486.91 1064578.36 00:20:19.727 [2024-12-10T11:27:46.841Z] =================================================================================================================== 00:20:19.727 [2024-12-10T11:27:46.841Z] Total : 2364.80 147.80 0.00 0.00 589950.99 486.91 1751837.82 00:20:21.107 00:20:21.107 real 0m8.527s 00:20:21.107 user 0m15.363s 00:20:21.107 sys 0m0.707s 00:20:21.107 11:27:48 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.107 11:27:48 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:21.107 ************************************ 00:20:21.107 END TEST bdev_verify_big_io 00:20:21.107 ************************************ 00:20:21.107 11:27:48 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:21.107 11:27:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:21.107 11:27:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.107 11:27:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:21.107 ************************************ 00:20:21.107 START TEST bdev_write_zeroes 00:20:21.107 ************************************ 00:20:21.107 11:27:48 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:21.366 [2024-12-10 11:27:48.247894] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:20:21.366 [2024-12-10 11:27:48.248054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74834 ] 00:20:21.366 [2024-12-10 11:27:48.423040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.625 [2024-12-10 11:27:48.526781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.884 Running I/O for 1 seconds... 
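bdev_write_zeroes, whose run follows, swaps the workload for -w write_zeroes at the same queue depth of 128 on a single core ("Total cores available: 1"). The Average column in its Latency(us) table is consistent with Little's law, average latency being roughly queue depth divided by IOPS: for nvme2n1 below, 128 / 10151.16 IOPS comes to about 12.6 ms, in line with the reported 12562.73 us, and for nvme0n1, 128 / 6486.84 IOPS comes to about 19.7 ms against the reported 19714.57 us.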
00:20:23.264 42272.00 IOPS, 165.12 MiB/s 00:20:23.264 Latency(us) 00:20:23.264 [2024-12-10T11:27:50.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.264 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:23.264 nvme0n1 : 1.03 6486.84 25.34 0.00 0.00 19714.57 9633.00 31583.61 00:20:23.264 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:23.264 nvme0n2 : 1.03 6479.59 25.31 0.00 0.00 19724.74 9685.64 31794.17 00:20:23.264 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:23.264 nvme0n3 : 1.03 6472.31 25.28 0.00 0.00 19734.15 9685.64 31794.17 00:20:23.264 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:23.264 nvme1n1 : 1.03 6465.38 25.26 0.00 0.00 19742.33 9633.00 31373.06 00:20:23.264 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:23.264 nvme2n1 : 1.02 10151.16 39.65 0.00 0.00 12562.73 5316.58 24529.94 00:20:23.264 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:23.264 nvme3n1 : 1.03 6458.28 25.23 0.00 0.00 19620.28 3079.40 30530.83 00:20:23.264 [2024-12-10T11:27:50.378Z] =================================================================================================================== 00:20:23.264 [2024-12-10T11:27:50.378Z] Total : 42513.56 166.07 0.00 0.00 18006.15 3079.40 31794.17 00:20:24.201 00:20:24.201 real 0m2.953s 00:20:24.201 user 0m2.217s 00:20:24.201 sys 0m0.538s 00:20:24.201 11:27:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:24.201 11:27:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:24.201 ************************************ 00:20:24.201 END TEST bdev_write_zeroes 00:20:24.201 ************************************ 00:20:24.201 11:27:51 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:24.202 11:27:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:24.202 11:27:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:24.202 11:27:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:24.202 ************************************ 00:20:24.202 START TEST bdev_json_nonenclosed 00:20:24.202 ************************************ 00:20:24.202 11:27:51 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:24.202 [2024-12-10 11:27:51.270223] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:20:24.202 [2024-12-10 11:27:51.270352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74894 ] 00:20:24.461 [2024-12-10 11:27:51.446508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.461 [2024-12-10 11:27:51.553041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.461 [2024-12-10 11:27:51.553147] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:24.461 [2024-12-10 11:27:51.553169] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:24.461 [2024-12-10 11:27:51.553181] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:24.720 00:20:24.720 real 0m0.618s 00:20:24.720 user 0m0.378s 00:20:24.720 sys 0m0.136s 00:20:24.720 11:27:51 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:24.720 11:27:51 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:24.720 ************************************ 00:20:24.720 END TEST bdev_json_nonenclosed 00:20:24.720 ************************************ 00:20:24.979 11:27:51 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:24.979 11:27:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:24.979 11:27:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:24.979 11:27:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:24.979 ************************************ 00:20:24.979 START TEST bdev_json_nonarray 00:20:24.979 ************************************ 00:20:24.979 11:27:51 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:24.979 [2024-12-10 11:27:51.964550] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:20:24.979 [2024-12-10 11:27:51.964694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74919 ] 00:20:25.238 [2024-12-10 11:27:52.144267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.238 [2024-12-10 11:27:52.249401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.238 [2024-12-10 11:27:52.249518] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
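Both bdev_json_nonenclosed and bdev_json_nonarray are negative tests: bdevperf is pointed at a config file that is meant to fail json_config validation, and the test passes precisely when the app aborts with the error above and the shutdown messages that follow. A config is only accepted when the file is a single JSON object whose top-level "subsystems" key is an array — the same shape as the save_config dump later in this log. A minimal sketch of the accepted skeleton (the file name is illustrative; the actual nonenclosed.json/nonarray.json fixtures are not shown in this log):

  echo '{ "subsystems": [] }' > /tmp/minimal.json   # smallest shape json_config accepts
  # nonenclosed.json breaks the first rule (top level not enclosed in {});
  # nonarray.json breaks the second ("subsystems" present but not an array)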
00:20:25.238 [2024-12-10 11:27:52.249540] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:25.238 [2024-12-10 11:27:52.249553] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:25.497 00:20:25.497 real 0m0.614s 00:20:25.497 user 0m0.370s 00:20:25.497 sys 0m0.138s 00:20:25.497 11:27:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.497 11:27:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:25.497 ************************************ 00:20:25.497 END TEST bdev_json_nonarray 00:20:25.497 ************************************ 00:20:25.497 11:27:52 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:20:25.497 11:27:52 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:20:25.497 11:27:52 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:20:25.497 11:27:52 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:20:25.497 11:27:52 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:20:25.497 11:27:52 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:25.497 11:27:52 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:25.497 11:27:52 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:20:25.497 11:27:52 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:20:25.497 11:27:52 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:20:25.497 11:27:52 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:20:25.497 11:27:52 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:26.434 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:27.846 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:27.846 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:27.846 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:27.846 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:28.109 00:20:28.109 real 0m56.853s 00:20:28.109 user 1m34.975s 00:20:28.109 sys 0m30.562s 00:20:28.109 11:27:55 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.109 11:27:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:28.109 ************************************ 00:20:28.109 END TEST blockdev_xnvme 00:20:28.109 ************************************ 00:20:28.109 11:27:55 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:20:28.109 11:27:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:28.109 11:27:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.109 11:27:55 -- common/autotest_common.sh@10 -- # set +x 00:20:28.109 ************************************ 00:20:28.109 START TEST ublk 00:20:28.109 ************************************ 00:20:28.109 11:27:55 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:20:28.109 * Looking for test storage... 
00:20:28.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:28.369 11:27:55 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:28.369 11:27:55 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:20:28.369 11:27:55 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:28.369 11:27:55 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:28.369 11:27:55 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:28.369 11:27:55 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.369 11:27:55 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.369 11:27:55 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.369 11:27:55 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.369 11:27:55 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.369 11:27:55 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.369 11:27:55 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:20:28.369 11:27:55 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:20:28.369 11:27:55 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:20:28.369 11:27:55 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.369 11:27:55 ublk -- scripts/common.sh@344 -- # case "$op" in 00:20:28.369 11:27:55 ublk -- scripts/common.sh@345 -- # : 1 00:20:28.369 11:27:55 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.369 11:27:55 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:28.369 11:27:55 ublk -- scripts/common.sh@365 -- # decimal 1 00:20:28.369 11:27:55 ublk -- scripts/common.sh@353 -- # local d=1 00:20:28.369 11:27:55 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.369 11:27:55 ublk -- scripts/common.sh@355 -- # echo 1 00:20:28.369 11:27:55 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.369 11:27:55 ublk -- scripts/common.sh@366 -- # decimal 2 00:20:28.369 11:27:55 ublk -- scripts/common.sh@353 -- # local d=2 00:20:28.369 11:27:55 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.369 11:27:55 ublk -- scripts/common.sh@355 -- # echo 2 00:20:28.369 11:27:55 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:20:28.369 11:27:55 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.369 11:27:55 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.369 11:27:55 ublk -- scripts/common.sh@368 -- # return 0 00:20:28.369 11:27:55 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.369 11:27:55 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:28.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.369 --rc genhtml_branch_coverage=1 00:20:28.369 --rc genhtml_function_coverage=1 00:20:28.369 --rc genhtml_legend=1 00:20:28.369 --rc geninfo_all_blocks=1 00:20:28.369 --rc geninfo_unexecuted_blocks=1 00:20:28.369 00:20:28.369 ' 00:20:28.369 11:27:55 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:28.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.369 --rc genhtml_branch_coverage=1 00:20:28.369 --rc genhtml_function_coverage=1 00:20:28.369 --rc genhtml_legend=1 00:20:28.369 --rc geninfo_all_blocks=1 00:20:28.369 --rc geninfo_unexecuted_blocks=1 00:20:28.369 00:20:28.369 ' 00:20:28.369 11:27:55 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:28.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.369 --rc genhtml_branch_coverage=1 00:20:28.369 --rc 
genhtml_function_coverage=1 00:20:28.369 --rc genhtml_legend=1 00:20:28.369 --rc geninfo_all_blocks=1 00:20:28.369 --rc geninfo_unexecuted_blocks=1 00:20:28.369 00:20:28.369 ' 00:20:28.369 11:27:55 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:28.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.369 --rc genhtml_branch_coverage=1 00:20:28.369 --rc genhtml_function_coverage=1 00:20:28.369 --rc genhtml_legend=1 00:20:28.369 --rc geninfo_all_blocks=1 00:20:28.369 --rc geninfo_unexecuted_blocks=1 00:20:28.369 00:20:28.369 ' 00:20:28.369 11:27:55 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:28.369 11:27:55 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:28.369 11:27:55 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:28.369 11:27:55 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:28.369 11:27:55 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:28.369 11:27:55 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:28.369 11:27:55 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:28.369 11:27:55 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:28.369 11:27:55 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:20:28.370 11:27:55 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:20:28.370 11:27:55 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:20:28.370 11:27:55 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:20:28.370 11:27:55 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:20:28.370 11:27:55 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:20:28.370 11:27:55 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:20:28.370 11:27:55 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:20:28.370 11:27:55 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:20:28.370 11:27:55 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:20:28.370 11:27:55 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:20:28.370 11:27:55 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:20:28.370 11:27:55 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:28.370 11:27:55 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.370 11:27:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:28.370 ************************************ 00:20:28.370 START TEST test_save_ublk_config 00:20:28.370 ************************************ 00:20:28.370 11:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:20:28.370 11:27:55 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:20:28.370 11:27:55 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75210 00:20:28.370 11:27:55 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:20:28.370 11:27:55 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:20:28.370 11:27:55 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75210 00:20:28.370 11:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75210 ']' 00:20:28.370 11:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.370 11:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.370 11:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:28.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.370 11:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.370 11:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:28.370 [2024-12-10 11:27:55.469666] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:20:28.370 [2024-12-10 11:27:55.469802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75210 ] 00:20:28.627 [2024-12-10 11:27:55.651187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.884 [2024-12-10 11:27:55.758433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.817 11:27:56 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.817 11:27:56 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:20:29.817 11:27:56 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:20:29.817 11:27:56 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:20:29.817 11:27:56 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.817 11:27:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:29.817 [2024-12-10 11:27:56.753974] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:29.817 [2024-12-10 11:27:56.755200] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:29.817 malloc0 00:20:29.817 [2024-12-10 11:27:56.850096] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:29.817 [2024-12-10 11:27:56.850200] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:29.817 [2024-12-10 11:27:56.850215] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:29.817 [2024-12-10 11:27:56.850225] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:29.817 [2024-12-10 11:27:56.859091] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:29.817 [2024-12-10 11:27:56.859119] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:29.817 [2024-12-10 11:27:56.865967] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:29.817 [2024-12-10 11:27:56.866079] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:29.817 [2024-12-10 11:27:56.882957] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:29.817 0 00:20:29.817 11:27:56 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.817 11:27:56 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:20:29.817 11:27:56 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.817 11:27:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:30.122 11:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.122 11:27:57 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:20:30.122 "subsystems": [ 00:20:30.122 { 00:20:30.122 "subsystem": "fsdev", 00:20:30.122 
"config": [ 00:20:30.122 { 00:20:30.122 "method": "fsdev_set_opts", 00:20:30.122 "params": { 00:20:30.122 "fsdev_io_pool_size": 65535, 00:20:30.122 "fsdev_io_cache_size": 256 00:20:30.122 } 00:20:30.122 } 00:20:30.122 ] 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "subsystem": "keyring", 00:20:30.122 "config": [] 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "subsystem": "iobuf", 00:20:30.122 "config": [ 00:20:30.122 { 00:20:30.122 "method": "iobuf_set_options", 00:20:30.122 "params": { 00:20:30.122 "small_pool_count": 8192, 00:20:30.122 "large_pool_count": 1024, 00:20:30.122 "small_bufsize": 8192, 00:20:30.122 "large_bufsize": 135168, 00:20:30.122 "enable_numa": false 00:20:30.122 } 00:20:30.122 } 00:20:30.122 ] 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "subsystem": "sock", 00:20:30.122 "config": [ 00:20:30.122 { 00:20:30.122 "method": "sock_set_default_impl", 00:20:30.122 "params": { 00:20:30.122 "impl_name": "posix" 00:20:30.122 } 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "method": "sock_impl_set_options", 00:20:30.122 "params": { 00:20:30.122 "impl_name": "ssl", 00:20:30.122 "recv_buf_size": 4096, 00:20:30.122 "send_buf_size": 4096, 00:20:30.122 "enable_recv_pipe": true, 00:20:30.122 "enable_quickack": false, 00:20:30.122 "enable_placement_id": 0, 00:20:30.122 "enable_zerocopy_send_server": true, 00:20:30.122 "enable_zerocopy_send_client": false, 00:20:30.122 "zerocopy_threshold": 0, 00:20:30.122 "tls_version": 0, 00:20:30.122 "enable_ktls": false 00:20:30.122 } 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "method": "sock_impl_set_options", 00:20:30.122 "params": { 00:20:30.122 "impl_name": "posix", 00:20:30.122 "recv_buf_size": 2097152, 00:20:30.122 "send_buf_size": 2097152, 00:20:30.122 "enable_recv_pipe": true, 00:20:30.122 "enable_quickack": false, 00:20:30.122 "enable_placement_id": 0, 00:20:30.122 "enable_zerocopy_send_server": true, 00:20:30.122 "enable_zerocopy_send_client": false, 00:20:30.122 "zerocopy_threshold": 0, 00:20:30.122 "tls_version": 0, 00:20:30.122 "enable_ktls": false 00:20:30.122 } 00:20:30.122 } 00:20:30.122 ] 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "subsystem": "vmd", 00:20:30.122 "config": [] 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "subsystem": "accel", 00:20:30.122 "config": [ 00:20:30.122 { 00:20:30.122 "method": "accel_set_options", 00:20:30.122 "params": { 00:20:30.122 "small_cache_size": 128, 00:20:30.122 "large_cache_size": 16, 00:20:30.122 "task_count": 2048, 00:20:30.122 "sequence_count": 2048, 00:20:30.122 "buf_count": 2048 00:20:30.122 } 00:20:30.122 } 00:20:30.122 ] 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "subsystem": "bdev", 00:20:30.122 "config": [ 00:20:30.122 { 00:20:30.122 "method": "bdev_set_options", 00:20:30.122 "params": { 00:20:30.122 "bdev_io_pool_size": 65535, 00:20:30.122 "bdev_io_cache_size": 256, 00:20:30.122 "bdev_auto_examine": true, 00:20:30.122 "iobuf_small_cache_size": 128, 00:20:30.122 "iobuf_large_cache_size": 16 00:20:30.122 } 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "method": "bdev_raid_set_options", 00:20:30.122 "params": { 00:20:30.122 "process_window_size_kb": 1024, 00:20:30.122 "process_max_bandwidth_mb_sec": 0 00:20:30.122 } 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "method": "bdev_iscsi_set_options", 00:20:30.122 "params": { 00:20:30.122 "timeout_sec": 30 00:20:30.122 } 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "method": "bdev_nvme_set_options", 00:20:30.122 "params": { 00:20:30.122 "action_on_timeout": "none", 00:20:30.122 "timeout_us": 0, 00:20:30.122 "timeout_admin_us": 0, 00:20:30.122 
"keep_alive_timeout_ms": 10000, 00:20:30.122 "arbitration_burst": 0, 00:20:30.122 "low_priority_weight": 0, 00:20:30.122 "medium_priority_weight": 0, 00:20:30.122 "high_priority_weight": 0, 00:20:30.122 "nvme_adminq_poll_period_us": 10000, 00:20:30.122 "nvme_ioq_poll_period_us": 0, 00:20:30.122 "io_queue_requests": 0, 00:20:30.122 "delay_cmd_submit": true, 00:20:30.122 "transport_retry_count": 4, 00:20:30.122 "bdev_retry_count": 3, 00:20:30.122 "transport_ack_timeout": 0, 00:20:30.122 "ctrlr_loss_timeout_sec": 0, 00:20:30.122 "reconnect_delay_sec": 0, 00:20:30.122 "fast_io_fail_timeout_sec": 0, 00:20:30.122 "disable_auto_failback": false, 00:20:30.122 "generate_uuids": false, 00:20:30.122 "transport_tos": 0, 00:20:30.122 "nvme_error_stat": false, 00:20:30.122 "rdma_srq_size": 0, 00:20:30.122 "io_path_stat": false, 00:20:30.122 "allow_accel_sequence": false, 00:20:30.122 "rdma_max_cq_size": 0, 00:20:30.122 "rdma_cm_event_timeout_ms": 0, 00:20:30.122 "dhchap_digests": [ 00:20:30.122 "sha256", 00:20:30.122 "sha384", 00:20:30.122 "sha512" 00:20:30.122 ], 00:20:30.122 "dhchap_dhgroups": [ 00:20:30.122 "null", 00:20:30.122 "ffdhe2048", 00:20:30.122 "ffdhe3072", 00:20:30.122 "ffdhe4096", 00:20:30.122 "ffdhe6144", 00:20:30.122 "ffdhe8192" 00:20:30.122 ] 00:20:30.122 } 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "method": "bdev_nvme_set_hotplug", 00:20:30.122 "params": { 00:20:30.122 "period_us": 100000, 00:20:30.122 "enable": false 00:20:30.122 } 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "method": "bdev_malloc_create", 00:20:30.122 "params": { 00:20:30.122 "name": "malloc0", 00:20:30.122 "num_blocks": 8192, 00:20:30.122 "block_size": 4096, 00:20:30.122 "physical_block_size": 4096, 00:20:30.122 "uuid": "bb245ac1-67bf-4977-9a80-9a17e46a7dc4", 00:20:30.122 "optimal_io_boundary": 0, 00:20:30.122 "md_size": 0, 00:20:30.122 "dif_type": 0, 00:20:30.122 "dif_is_head_of_md": false, 00:20:30.122 "dif_pi_format": 0 00:20:30.122 } 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "method": "bdev_wait_for_examine" 00:20:30.122 } 00:20:30.122 ] 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "subsystem": "scsi", 00:20:30.122 "config": null 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "subsystem": "scheduler", 00:20:30.122 "config": [ 00:20:30.122 { 00:20:30.122 "method": "framework_set_scheduler", 00:20:30.122 "params": { 00:20:30.122 "name": "static" 00:20:30.122 } 00:20:30.122 } 00:20:30.122 ] 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "subsystem": "vhost_scsi", 00:20:30.122 "config": [] 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "subsystem": "vhost_blk", 00:20:30.122 "config": [] 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "subsystem": "ublk", 00:20:30.122 "config": [ 00:20:30.122 { 00:20:30.122 "method": "ublk_create_target", 00:20:30.122 "params": { 00:20:30.122 "cpumask": "1" 00:20:30.122 } 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "method": "ublk_start_disk", 00:20:30.122 "params": { 00:20:30.122 "bdev_name": "malloc0", 00:20:30.122 "ublk_id": 0, 00:20:30.122 "num_queues": 1, 00:20:30.122 "queue_depth": 128 00:20:30.122 } 00:20:30.122 } 00:20:30.122 ] 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "subsystem": "nbd", 00:20:30.122 "config": [] 00:20:30.122 }, 00:20:30.122 { 00:20:30.122 "subsystem": "nvmf", 00:20:30.122 "config": [ 00:20:30.122 { 00:20:30.122 "method": "nvmf_set_config", 00:20:30.122 "params": { 00:20:30.123 "discovery_filter": "match_any", 00:20:30.123 "admin_cmd_passthru": { 00:20:30.123 "identify_ctrlr": false 00:20:30.123 }, 00:20:30.123 "dhchap_digests": [ 00:20:30.123 "sha256", 00:20:30.123 
"sha384", 00:20:30.123 "sha512" 00:20:30.123 ], 00:20:30.123 "dhchap_dhgroups": [ 00:20:30.123 "null", 00:20:30.123 "ffdhe2048", 00:20:30.123 "ffdhe3072", 00:20:30.123 "ffdhe4096", 00:20:30.123 "ffdhe6144", 00:20:30.123 "ffdhe8192" 00:20:30.123 ] 00:20:30.123 } 00:20:30.123 }, 00:20:30.123 { 00:20:30.123 "method": "nvmf_set_max_subsystems", 00:20:30.123 "params": { 00:20:30.123 "max_subsystems": 1024 00:20:30.123 } 00:20:30.123 }, 00:20:30.123 { 00:20:30.123 "method": "nvmf_set_crdt", 00:20:30.123 "params": { 00:20:30.123 "crdt1": 0, 00:20:30.123 "crdt2": 0, 00:20:30.123 "crdt3": 0 00:20:30.123 } 00:20:30.123 } 00:20:30.123 ] 00:20:30.123 }, 00:20:30.123 { 00:20:30.123 "subsystem": "iscsi", 00:20:30.123 "config": [ 00:20:30.123 { 00:20:30.123 "method": "iscsi_set_options", 00:20:30.123 "params": { 00:20:30.123 "node_base": "iqn.2016-06.io.spdk", 00:20:30.123 "max_sessions": 128, 00:20:30.123 "max_connections_per_session": 2, 00:20:30.123 "max_queue_depth": 64, 00:20:30.123 "default_time2wait": 2, 00:20:30.123 "default_time2retain": 20, 00:20:30.123 "first_burst_length": 8192, 00:20:30.123 "immediate_data": true, 00:20:30.123 "allow_duplicated_isid": false, 00:20:30.123 "error_recovery_level": 0, 00:20:30.123 "nop_timeout": 60, 00:20:30.123 "nop_in_interval": 30, 00:20:30.123 "disable_chap": false, 00:20:30.123 "require_chap": false, 00:20:30.123 "mutual_chap": false, 00:20:30.123 "chap_group": 0, 00:20:30.123 "max_large_datain_per_connection": 64, 00:20:30.123 "max_r2t_per_connection": 4, 00:20:30.123 "pdu_pool_size": 36864, 00:20:30.123 "immediate_data_pool_size": 16384, 00:20:30.123 "data_out_pool_size": 2048 00:20:30.123 } 00:20:30.123 } 00:20:30.123 ] 00:20:30.123 } 00:20:30.123 ] 00:20:30.123 }' 00:20:30.123 11:27:57 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75210 00:20:30.123 11:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75210 ']' 00:20:30.123 11:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75210 00:20:30.123 11:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:20:30.123 11:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.123 11:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75210 00:20:30.382 11:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:30.382 killing process with pid 75210 00:20:30.382 11:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:30.382 11:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75210' 00:20:30.382 11:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75210 00:20:30.382 11:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75210 00:20:31.761 [2024-12-10 11:27:58.727756] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:31.761 [2024-12-10 11:27:58.773968] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:31.761 [2024-12-10 11:27:58.774087] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:31.761 [2024-12-10 11:27:58.781957] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:31.761 [2024-12-10 11:27:58.782014] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:20:31.761 [2024-12-10 11:27:58.782032] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:31.761 [2024-12-10 11:27:58.782062] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:31.761 [2024-12-10 11:27:58.782225] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:33.668 11:28:00 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75284 00:20:33.668 11:28:00 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75284 00:20:33.668 11:28:00 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75284 ']' 00:20:33.668 11:28:00 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.668 11:28:00 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.668 11:28:00 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.668 11:28:00 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:20:33.668 11:28:00 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.668 11:28:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:33.668 11:28:00 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:20:33.668 "subsystems": [ 00:20:33.668 { 00:20:33.668 "subsystem": "fsdev", 00:20:33.668 "config": [ 00:20:33.668 { 00:20:33.668 "method": "fsdev_set_opts", 00:20:33.668 "params": { 00:20:33.668 "fsdev_io_pool_size": 65535, 00:20:33.668 "fsdev_io_cache_size": 256 00:20:33.668 } 00:20:33.668 } 00:20:33.668 ] 00:20:33.668 }, 00:20:33.668 { 00:20:33.668 "subsystem": "keyring", 00:20:33.668 "config": [] 00:20:33.668 }, 00:20:33.668 { 00:20:33.668 "subsystem": "iobuf", 00:20:33.668 "config": [ 00:20:33.668 { 00:20:33.668 "method": "iobuf_set_options", 00:20:33.668 "params": { 00:20:33.668 "small_pool_count": 8192, 00:20:33.668 "large_pool_count": 1024, 00:20:33.668 "small_bufsize": 8192, 00:20:33.668 "large_bufsize": 135168, 00:20:33.668 "enable_numa": false 00:20:33.668 } 00:20:33.668 } 00:20:33.668 ] 00:20:33.668 }, 00:20:33.668 { 00:20:33.668 "subsystem": "sock", 00:20:33.668 "config": [ 00:20:33.668 { 00:20:33.668 "method": "sock_set_default_impl", 00:20:33.668 "params": { 00:20:33.668 "impl_name": "posix" 00:20:33.668 } 00:20:33.668 }, 00:20:33.668 { 00:20:33.668 "method": "sock_impl_set_options", 00:20:33.668 "params": { 00:20:33.668 "impl_name": "ssl", 00:20:33.668 "recv_buf_size": 4096, 00:20:33.668 "send_buf_size": 4096, 00:20:33.668 "enable_recv_pipe": true, 00:20:33.668 "enable_quickack": false, 00:20:33.668 "enable_placement_id": 0, 00:20:33.668 "enable_zerocopy_send_server": true, 00:20:33.668 "enable_zerocopy_send_client": false, 00:20:33.668 "zerocopy_threshold": 0, 00:20:33.668 "tls_version": 0, 00:20:33.668 "enable_ktls": false 00:20:33.668 } 00:20:33.668 }, 00:20:33.668 { 00:20:33.668 "method": "sock_impl_set_options", 00:20:33.668 "params": { 00:20:33.668 "impl_name": "posix", 00:20:33.668 "recv_buf_size": 2097152, 00:20:33.668 "send_buf_size": 2097152, 00:20:33.668 "enable_recv_pipe": true, 00:20:33.668 "enable_quickack": false, 00:20:33.668 "enable_placement_id": 0, 00:20:33.668 "enable_zerocopy_send_server": true, 00:20:33.668 "enable_zerocopy_send_client": false, 00:20:33.668 "zerocopy_threshold": 0, 
00:20:33.668 "tls_version": 0, 00:20:33.668 "enable_ktls": false 00:20:33.668 } 00:20:33.668 } 00:20:33.668 ] 00:20:33.668 }, 00:20:33.668 { 00:20:33.668 "subsystem": "vmd", 00:20:33.668 "config": [] 00:20:33.668 }, 00:20:33.668 { 00:20:33.668 "subsystem": "accel", 00:20:33.668 "config": [ 00:20:33.668 { 00:20:33.668 "method": "accel_set_options", 00:20:33.668 "params": { 00:20:33.668 "small_cache_size": 128, 00:20:33.668 "large_cache_size": 16, 00:20:33.668 "task_count": 2048, 00:20:33.668 "sequence_count": 2048, 00:20:33.668 "buf_count": 2048 00:20:33.668 } 00:20:33.668 } 00:20:33.668 ] 00:20:33.668 }, 00:20:33.668 { 00:20:33.668 "subsystem": "bdev", 00:20:33.668 "config": [ 00:20:33.668 { 00:20:33.668 "method": "bdev_set_options", 00:20:33.668 "params": { 00:20:33.668 "bdev_io_pool_size": 65535, 00:20:33.668 "bdev_io_cache_size": 256, 00:20:33.668 "bdev_auto_examine": true, 00:20:33.668 "iobuf_small_cache_size": 128, 00:20:33.669 "iobuf_large_cache_size": 16 00:20:33.669 } 00:20:33.669 }, 00:20:33.669 { 00:20:33.669 "method": "bdev_raid_set_options", 00:20:33.669 "params": { 00:20:33.669 "process_window_size_kb": 1024, 00:20:33.669 "process_max_bandwidth_mb_sec": 0 00:20:33.669 } 00:20:33.669 }, 00:20:33.669 { 00:20:33.669 "method": "bdev_iscsi_set_options", 00:20:33.669 "params": { 00:20:33.669 "timeout_sec": 30 00:20:33.669 } 00:20:33.669 }, 00:20:33.669 { 00:20:33.669 "method": "bdev_nvme_set_options", 00:20:33.669 "params": { 00:20:33.669 "action_on_timeout": "none", 00:20:33.669 "timeout_us": 0, 00:20:33.669 "timeout_admin_us": 0, 00:20:33.669 "keep_alive_timeout_ms": 10000, 00:20:33.669 "arbitration_burst": 0, 00:20:33.669 "low_priority_weight": 0, 00:20:33.669 "medium_priority_weight": 0, 00:20:33.669 "high_priority_weight": 0, 00:20:33.669 "nvme_adminq_poll_period_us": 10000, 00:20:33.669 "nvme_ioq_poll_period_us": 0, 00:20:33.669 "io_queue_requests": 0, 00:20:33.669 "delay_cmd_submit": true, 00:20:33.669 "transport_retry_count": 4, 00:20:33.669 "bdev_retry_count": 3, 00:20:33.669 "transport_ack_timeout": 0, 00:20:33.669 "ctrlr_loss_timeout_sec": 0, 00:20:33.669 "reconnect_delay_sec": 0, 00:20:33.669 "fast_io_fail_timeout_sec": 0, 00:20:33.669 "disable_auto_failback": false, 00:20:33.669 "generate_uuids": false, 00:20:33.669 "transport_tos": 0, 00:20:33.669 "nvme_error_stat": false, 00:20:33.669 "rdma_srq_size": 0, 00:20:33.669 "io_path_stat": false, 00:20:33.669 "allow_accel_sequence": false, 00:20:33.669 "rdma_max_cq_size": 0, 00:20:33.669 "rdma_cm_event_timeout_ms": 0, 00:20:33.669 "dhchap_digests": [ 00:20:33.669 "sha256", 00:20:33.669 "sha384", 00:20:33.669 "sha512" 00:20:33.669 ], 00:20:33.669 "dhchap_dhgroups": [ 00:20:33.669 "null", 00:20:33.669 "ffdhe2048", 00:20:33.669 "ffdhe3072", 00:20:33.669 "ffdhe4096", 00:20:33.669 "ffdhe6144", 00:20:33.669 "ffdhe8192" 00:20:33.669 ] 00:20:33.669 } 00:20:33.669 }, 00:20:33.669 { 00:20:33.669 "method": "bdev_nvme_set_hotplug", 00:20:33.669 "params": { 00:20:33.669 "period_us": 100000, 00:20:33.669 "enable": false 00:20:33.669 } 00:20:33.669 }, 00:20:33.669 { 00:20:33.669 "method": "bdev_malloc_create", 00:20:33.669 "params": { 00:20:33.669 "name": "malloc0", 00:20:33.669 "num_blocks": 8192, 00:20:33.669 "block_size": 4096, 00:20:33.669 "physical_block_size": 4096, 00:20:33.669 "uuid": "bb245ac1-67bf-4977-9a80-9a17e46a7dc4", 00:20:33.669 "optimal_io_boundary": 0, 00:20:33.669 "md_size": 0, 00:20:33.669 "dif_type": 0, 00:20:33.669 "dif_is_head_of_md": false, 00:20:33.669 "dif_pi_format": 0 00:20:33.669 } 00:20:33.669 }, 00:20:33.669 
{ 00:20:33.669 "method": "bdev_wait_for_examine" 00:20:33.669 } 00:20:33.669 ] 00:20:33.669 }, 00:20:33.669 { 00:20:33.669 "subsystem": "scsi", 00:20:33.669 "config": null 00:20:33.669 }, 00:20:33.669 { 00:20:33.669 "subsystem": "scheduler", 00:20:33.669 "config": [ 00:20:33.669 { 00:20:33.669 "method": "framework_set_scheduler", 00:20:33.669 "params": { 00:20:33.669 "name": "static" 00:20:33.669 } 00:20:33.669 } 00:20:33.669 ] 00:20:33.669 }, 00:20:33.669 { 00:20:33.669 "subsystem": "vhost_scsi", 00:20:33.669 "config": [] 00:20:33.669 }, 00:20:33.669 { 00:20:33.669 "subsystem": "vhost_blk", 00:20:33.669 "config": [] 00:20:33.669 }, 00:20:33.669 { 00:20:33.669 "subsystem": "ublk", 00:20:33.669 "config": [ 00:20:33.669 { 00:20:33.669 "method": "ublk_create_target", 00:20:33.669 "params": { 00:20:33.669 "cpumask": "1" 00:20:33.669 } 00:20:33.669 }, 00:20:33.669 { 00:20:33.669 "method": "ublk_start_disk", 00:20:33.669 "params": { 00:20:33.669 "bdev_name": "malloc0", 00:20:33.669 "ublk_id": 0, 00:20:33.669 "num_queues": 1, 00:20:33.669 "queue_depth": 128 00:20:33.669 } 00:20:33.669 } 00:20:33.669 ] 00:20:33.669 }, 00:20:33.669 { 00:20:33.669 "subsystem": "nbd", 00:20:33.669 "config": [] 00:20:33.669 }, 00:20:33.669 { 00:20:33.669 "subsystem": "nvmf", 00:20:33.669 "config": [ 00:20:33.669 { 00:20:33.669 "method": "nvmf_set_config", 00:20:33.669 "params": { 00:20:33.669 "discovery_filter": "match_any", 00:20:33.669 "admin_cmd_passthru": { 00:20:33.669 "identify_ctrlr": false 00:20:33.669 }, 00:20:33.669 "dhchap_digests": [ 00:20:33.669 "sha256", 00:20:33.669 "sha384", 00:20:33.669 "sha512" 00:20:33.669 ], 00:20:33.669 "dhchap_dhgroups": [ 00:20:33.669 "null", 00:20:33.669 "ffdhe2048", 00:20:33.669 "ffdhe3072", 00:20:33.669 "ffdhe4096", 00:20:33.669 "ffdhe6144", 00:20:33.669 "ffdhe8192" 00:20:33.669 ] 00:20:33.669 } 00:20:33.669 }, 00:20:33.669 { 00:20:33.669 "method": "nvmf_set_max_subsystems", 00:20:33.669 "params": { 00:20:33.669 "max_subsystems": 1024 00:20:33.669 } 00:20:33.669 }, 00:20:33.669 { 00:20:33.669 "method": "nvmf_set_crdt", 00:20:33.669 "params": { 00:20:33.669 "crdt1": 0, 00:20:33.669 "crdt2": 0, 00:20:33.669 "crdt3": 0 00:20:33.669 } 00:20:33.669 } 00:20:33.669 ] 00:20:33.669 }, 00:20:33.669 { 00:20:33.669 "subsystem": "iscsi", 00:20:33.669 "config": [ 00:20:33.669 { 00:20:33.669 "method": "iscsi_set_options", 00:20:33.669 "params": { 00:20:33.669 "node_base": "iqn.2016-06.io.spdk", 00:20:33.669 "max_sessions": 128, 00:20:33.669 "max_connections_per_session": 2, 00:20:33.669 "max_queue_depth": 64, 00:20:33.669 "default_time2wait": 2, 00:20:33.669 "default_time2retain": 20, 00:20:33.669 "first_burst_length": 8192, 00:20:33.669 "immediate_data": true, 00:20:33.669 "allow_duplicated_isid": false, 00:20:33.669 "error_recovery_level": 0, 00:20:33.669 "nop_timeout": 60, 00:20:33.669 "nop_in_interval": 30, 00:20:33.669 "disable_chap": false, 00:20:33.669 "require_chap": false, 00:20:33.669 "mutual_chap": false, 00:20:33.669 "chap_group": 0, 00:20:33.669 "max_large_datain_per_connection": 64, 00:20:33.669 "max_r2t_per_connection": 4, 00:20:33.669 "pdu_pool_size": 36864, 00:20:33.669 "immediate_data_pool_size": 16384, 00:20:33.669 "data_out_pool_size": 2048 00:20:33.669 } 00:20:33.669 } 00:20:33.669 ] 00:20:33.669 } 00:20:33.669 ] 00:20:33.669 }' 00:20:33.928 [2024-12-10 11:28:00.814320] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:20:33.928 [2024-12-10 11:28:00.814967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75284 ] 00:20:33.928 [2024-12-10 11:28:00.995575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.188 [2024-12-10 11:28:01.125957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.566 [2024-12-10 11:28:02.288936] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:35.566 [2024-12-10 11:28:02.290167] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:35.566 [2024-12-10 11:28:02.297085] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:35.566 [2024-12-10 11:28:02.297178] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:35.566 [2024-12-10 11:28:02.297193] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:35.566 [2024-12-10 11:28:02.297202] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:35.566 [2024-12-10 11:28:02.306044] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:35.566 [2024-12-10 11:28:02.306068] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:35.566 [2024-12-10 11:28:02.312944] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:35.566 [2024-12-10 11:28:02.313044] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:35.566 [2024-12-10 11:28:02.329941] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75284 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75284 ']' 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75284 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75284 00:20:35.566 killing process with pid 75284 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:35.566 
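Once the replayed target is listening, the checks above assert that the ublk device came back purely from the saved JSON: ublk_get_disks must report /dev/ublkb0 for entry 0, and that node must exist as a block device before the target is killed. The same verification written as standalone shell (rpc.py path assumed, mirroring the harness's jq and -b checks):

  ./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # expect /dev/ublkb0
  test -b /dev/ublkb0 && echo 'ublk block device is present'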
11:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75284' 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75284 00:20:35.566 11:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75284 00:20:36.944 [2024-12-10 11:28:03.958463] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:36.944 [2024-12-10 11:28:04.001965] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:36.944 [2024-12-10 11:28:04.002091] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:36.944 [2024-12-10 11:28:04.012959] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:36.944 [2024-12-10 11:28:04.013010] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:36.944 [2024-12-10 11:28:04.013020] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:36.944 [2024-12-10 11:28:04.013049] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:36.944 [2024-12-10 11:28:04.013203] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:38.863 11:28:05 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:20:38.864 00:20:38.864 real 0m10.456s 00:20:38.864 user 0m7.744s 00:20:38.864 sys 0m3.394s 00:20:38.864 ************************************ 00:20:38.864 END TEST test_save_ublk_config 00:20:38.864 ************************************ 00:20:38.864 11:28:05 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.864 11:28:05 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:38.864 11:28:05 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75370 00:20:38.864 11:28:05 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:38.864 11:28:05 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:38.864 11:28:05 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75370 00:20:38.864 11:28:05 ublk -- common/autotest_common.sh@835 -- # '[' -z 75370 ']' 00:20:38.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.864 11:28:05 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.864 11:28:05 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.864 11:28:05 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.864 11:28:05 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.864 11:28:05 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:39.123 [2024-12-10 11:28:05.979880] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:20:39.123 [2024-12-10 11:28:05.980259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75370 ] 00:20:39.123 [2024-12-10 11:28:06.164684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:39.385 [2024-12-10 11:28:06.271390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.385 [2024-12-10 11:28:06.271428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.333 11:28:07 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.333 11:28:07 ublk -- common/autotest_common.sh@868 -- # return 0 00:20:40.333 11:28:07 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:20:40.333 11:28:07 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:40.333 11:28:07 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.333 11:28:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.333 ************************************ 00:20:40.333 START TEST test_create_ublk 00:20:40.333 ************************************ 00:20:40.333 11:28:07 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:20:40.333 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:20:40.334 11:28:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.334 11:28:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.334 [2024-12-10 11:28:07.141962] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:40.334 [2024-12-10 11:28:07.144608] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:40.334 11:28:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.334 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:20:40.334 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:20:40.334 11:28:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.334 11:28:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.334 11:28:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.334 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:20:40.334 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:40.334 11:28:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.334 11:28:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.334 [2024-12-10 11:28:07.434120] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:20:40.334 [2024-12-10 11:28:07.434557] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:40.334 [2024-12-10 11:28:07.434578] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:40.334 [2024-12-10 11:28:07.434587] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:40.334 [2024-12-10 11:28:07.443234] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:40.334 [2024-12-10 11:28:07.443263] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:40.593 
[2024-12-10 11:28:07.449974] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:40.594 [2024-12-10 11:28:07.450547] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:40.594 [2024-12-10 11:28:07.466005] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:40.594 11:28:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.594 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:20:40.594 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:20:40.594 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:20:40.594 11:28:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.594 11:28:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.594 11:28:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.594 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:20:40.594 { 00:20:40.594 "ublk_device": "/dev/ublkb0", 00:20:40.594 "id": 0, 00:20:40.594 "queue_depth": 512, 00:20:40.594 "num_queues": 4, 00:20:40.594 "bdev_name": "Malloc0" 00:20:40.594 } 00:20:40.594 ]' 00:20:40.594 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:20:40.594 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:40.594 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:20:40.594 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:20:40.594 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:20:40.594 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:20:40.594 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:20:40.594 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:20:40.594 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:20:40.594 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:40.594 11:28:07 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:20:40.594 11:28:07 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:20:40.594 11:28:07 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:20:40.594 11:28:07 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:20:40.594 11:28:07 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:20:40.594 11:28:07 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:20:40.594 11:28:07 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:20:40.594 11:28:07 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:20:40.594 11:28:07 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:20:40.594 11:28:07 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:20:40.594 11:28:07 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
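The run_fio_test call assembled above expands to the fio job shown next: write a 0xcc pattern over the full 128 MiB of /dev/ublkb0 with O_DIRECT, time-based for 10 seconds. Because --time_based spends the entire runtime in the write phase, fio's verify read pass never executes — exactly the warning fio prints at the start of the job below. A sketch of a variant that does exercise read-back verification, simply by dropping the time bound (an assumption about how one might check the pattern by hand, not what the harness runs):

  fio --name=ublk_verify --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --do_verify=1 --verify=pattern \
      --verify_pattern=0xcc --verify_state_save=0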
00:20:40.594 11:28:07 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:20:40.853 fio: verification read phase will never start because write phase uses all of runtime 00:20:40.853 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:20:40.853 fio-3.35 00:20:40.853 Starting 1 process 00:20:50.840 00:20:50.840 fio_test: (groupid=0, jobs=1): err= 0: pid=75422: Tue Dec 10 11:28:17 2024 00:20:50.840 write: IOPS=11.9k, BW=46.4MiB/s (48.6MB/s)(464MiB/10001msec); 0 zone resets 00:20:50.840 clat (usec): min=41, max=4047, avg=83.40, stdev=106.88 00:20:50.840 lat (usec): min=41, max=4048, avg=83.85, stdev=106.89 00:20:50.840 clat percentiles (usec): 00:20:50.840 | 1.00th=[ 44], 5.00th=[ 69], 10.00th=[ 73], 20.00th=[ 75], 00:20:50.840 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 81], 00:20:50.840 | 70.00th=[ 82], 80.00th=[ 84], 90.00th=[ 87], 95.00th=[ 91], 00:20:50.840 | 99.00th=[ 106], 99.50th=[ 127], 99.90th=[ 2343], 99.95th=[ 2900], 00:20:50.840 | 99.99th=[ 3687] 00:20:50.840 bw ( KiB/s): min=46160, max=60918, per=100.00%, avg=47611.68, stdev=3252.19, samples=19 00:20:50.840 iops : min=11540, max=15229, avg=11902.89, stdev=812.93, samples=19 00:20:50.840 lat (usec) : 50=3.83%, 100=94.62%, 250=1.28%, 500=0.03%, 750=0.02% 00:20:50.840 lat (usec) : 1000=0.02% 00:20:50.840 lat (msec) : 2=0.08%, 4=0.12%, 10=0.01% 00:20:50.840 cpu : usr=2.33%, sys=9.39%, ctx=118762, majf=0, minf=795 00:20:50.840 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:50.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.840 issued rwts: total=0,118770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.840 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:50.840 00:20:50.840 Run status group 0 (all jobs): 00:20:50.840 WRITE: bw=46.4MiB/s (48.6MB/s), 46.4MiB/s-46.4MiB/s (48.6MB/s-48.6MB/s), io=464MiB (486MB), run=10001-10001msec 00:20:50.840 00:20:50.840 Disk stats (read/write): 00:20:50.840 ublkb0: ios=0/117575, merge=0/0, ticks=0/8656, in_queue=8656, util=99.12% 00:20:50.840 11:28:17 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:20:50.840 11:28:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.840 11:28:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:51.100 [2024-12-10 11:28:17.953475] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:51.100 [2024-12-10 11:28:17.982558] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:51.100 [2024-12-10 11:28:17.983476] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:51.100 [2024-12-10 11:28:17.988979] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:51.100 [2024-12-10 11:28:17.989383] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:51.100 [2024-12-10 11:28:17.989409] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:51.100 11:28:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.100 11:28:17 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:20:51.100 11:28:17 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:20:51.100 11:28:17 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:20:51.100 11:28:17 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:51.100 11:28:17 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.100 11:28:17 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:51.100 11:28:17 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.100 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:20:51.100 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.100 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:51.100 [2024-12-10 11:28:18.013081] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:20:51.100 request: 00:20:51.100 { 00:20:51.100 "ublk_id": 0, 00:20:51.100 "method": "ublk_stop_disk", 00:20:51.100 "req_id": 1 00:20:51.100 } 00:20:51.100 Got JSON-RPC error response 00:20:51.100 response: 00:20:51.100 { 00:20:51.100 "code": -19, 00:20:51.100 "message": "No such device" 00:20:51.100 } 00:20:51.100 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:51.100 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:20:51.100 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:51.100 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:51.100 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:51.100 11:28:18 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:20:51.100 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.100 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:51.100 [2024-12-10 11:28:18.036058] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:51.100 [2024-12-10 11:28:18.044841] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:51.100 [2024-12-10 11:28:18.044881] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:51.100 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.100 11:28:18 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:51.100 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.100 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:51.669 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.669 11:28:18 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:20:51.669 11:28:18 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:51.669 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.669 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:51.669 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.669 11:28:18 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:51.669 11:28:18 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:20:52.026 11:28:18 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:20:52.026 11:28:18 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:52.026 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.026 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:52.026 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.026 11:28:18 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:52.026 11:28:18 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:20:52.026 ************************************ 00:20:52.026 END TEST test_create_ublk 00:20:52.026 ************************************ 00:20:52.026 11:28:18 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:52.026 00:20:52.026 real 0m11.721s 00:20:52.026 user 0m0.601s 00:20:52.026 sys 0m1.081s 00:20:52.026 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.026 11:28:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:52.026 11:28:18 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:20:52.026 11:28:18 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:52.026 11:28:18 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.026 11:28:18 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:52.026 ************************************ 00:20:52.026 START TEST test_create_multi_ublk 00:20:52.026 ************************************ 00:20:52.026 11:28:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:20:52.026 11:28:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:20:52.026 11:28:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.026 11:28:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:52.026 [2024-12-10 11:28:18.938944] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:52.026 [2024-12-10 11:28:18.941555] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:52.026 11:28:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.026 11:28:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:20:52.026 11:28:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:20:52.026 11:28:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:52.026 11:28:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:20:52.026 11:28:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.026 11:28:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:52.284 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.284 11:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:20:52.284 11:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:52.284 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.284 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:52.284 [2024-12-10 11:28:19.218102] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:20:52.285 [2024-12-10 11:28:19.218596] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:52.285 [2024-12-10 11:28:19.218613] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:52.285 [2024-12-10 11:28:19.218627] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:52.285 [2024-12-10 11:28:19.233963] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:52.285 [2024-12-10 11:28:19.233993] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:52.285 [2024-12-10 11:28:19.241982] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:52.285 [2024-12-10 11:28:19.242537] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:52.285 [2024-12-10 11:28:19.260986] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:52.285 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.285 11:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:20:52.285 11:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:52.285 11:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:20:52.285 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.285 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:52.544 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.544 11:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:20:52.544 11:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:20:52.544 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.544 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:52.544 [2024-12-10 11:28:19.546101] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:20:52.544 [2024-12-10 11:28:19.546582] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:20:52.544 [2024-12-10 11:28:19.546610] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:52.544 [2024-12-10 11:28:19.546624] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:52.544 [2024-12-10 11:28:19.554020] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:52.544 [2024-12-10 11:28:19.554046] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:52.544 [2024-12-10 11:28:19.561977] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:52.544 [2024-12-10 11:28:19.562598] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:52.544 [2024-12-10 11:28:19.571048] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:52.544 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.544 11:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:20:52.544 11:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:52.544 
11:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:20:52.544 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.544 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:52.804 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.804 11:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:20:52.804 11:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:20:52.804 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.804 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:52.804 [2024-12-10 11:28:19.854081] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:20:52.804 [2024-12-10 11:28:19.854541] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:20:52.804 [2024-12-10 11:28:19.854558] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:20:52.804 [2024-12-10 11:28:19.854569] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:20:52.804 [2024-12-10 11:28:19.862022] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:52.804 [2024-12-10 11:28:19.862057] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:52.804 [2024-12-10 11:28:19.870001] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:52.804 [2024-12-10 11:28:19.870570] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:20:52.804 [2024-12-10 11:28:19.879067] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:20:52.804 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.804 11:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:20:52.804 11:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:52.804 11:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:20:52.804 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.804 11:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:53.063 11:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.063 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:20:53.063 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:20:53.063 11:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.063 11:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:53.063 [2024-12-10 11:28:20.171118] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:20:53.063 [2024-12-10 11:28:20.171610] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:20:53.063 [2024-12-10 11:28:20.171636] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:20:53.063 [2024-12-10 11:28:20.171651] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:20:53.323 
[2024-12-10 11:28:20.179013] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:53.323 [2024-12-10 11:28:20.179039] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:53.323 [2024-12-10 11:28:20.186976] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:53.323 [2024-12-10 11:28:20.187575] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:20:53.323 [2024-12-10 11:28:20.196009] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:20:53.323 11:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.323 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:20:53.323 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:20:53.323 11:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.323 11:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:53.323 11:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.323 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:20:53.323 { 00:20:53.323 "ublk_device": "/dev/ublkb0", 00:20:53.323 "id": 0, 00:20:53.323 "queue_depth": 512, 00:20:53.323 "num_queues": 4, 00:20:53.323 "bdev_name": "Malloc0" 00:20:53.323 }, 00:20:53.323 { 00:20:53.323 "ublk_device": "/dev/ublkb1", 00:20:53.323 "id": 1, 00:20:53.323 "queue_depth": 512, 00:20:53.323 "num_queues": 4, 00:20:53.323 "bdev_name": "Malloc1" 00:20:53.323 }, 00:20:53.323 { 00:20:53.323 "ublk_device": "/dev/ublkb2", 00:20:53.324 "id": 2, 00:20:53.324 "queue_depth": 512, 00:20:53.324 "num_queues": 4, 00:20:53.324 "bdev_name": "Malloc2" 00:20:53.324 }, 00:20:53.324 { 00:20:53.324 "ublk_device": "/dev/ublkb3", 00:20:53.324 "id": 3, 00:20:53.324 "queue_depth": 512, 00:20:53.324 "num_queues": 4, 00:20:53.324 "bdev_name": "Malloc3" 00:20:53.324 } 00:20:53.324 ]' 00:20:53.324 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:20:53.324 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:53.324 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:20:53.324 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:53.324 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:20:53.324 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:20:53.324 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:20:53.324 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:53.324 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:20:53.324 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:53.324 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:20:53.583 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:53.583 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:53.583 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:20:53.583 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:20:53.583 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:20:53.583 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:20:53.583 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:20:53.583 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:53.583 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:20:53.583 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:53.583 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:20:53.583 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:20:53.583 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:53.583 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:20:53.583 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:20:53.583 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:20:53.843 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:20:53.843 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:20:53.843 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:53.843 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:20:53.843 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:53.843 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:20:53.843 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:20:53.843 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:53.843 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:20:53.843 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:20:53.843 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:20:53.843 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:20:53.843 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:20:54.102 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:54.102 11:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:54.102 [2024-12-10 11:28:21.065073] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:54.102 [2024-12-10 11:28:21.097525] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:54.102 [2024-12-10 11:28:21.099360] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:54.102 [2024-12-10 11:28:21.104010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:54.102 [2024-12-10 11:28:21.104364] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:54.102 [2024-12-10 11:28:21.104378] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:54.102 [2024-12-10 11:28:21.126098] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:20:54.102 [2024-12-10 11:28:21.158052] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:54.102 [2024-12-10 11:28:21.159664] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:20:54.102 [2024-12-10 11:28:21.164995] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:54.102 [2024-12-10 11:28:21.165385] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:20:54.102 [2024-12-10 11:28:21.165408] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:20:54.102 11:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.103 11:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:54.103 [2024-12-10 11:28:21.173049] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:20:54.103 [2024-12-10 11:28:21.205064] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:54.103 [2024-12-10 11:28:21.206571] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:20:54.103 [2024-12-10 11:28:21.212962] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:54.103 [2024-12-10 11:28:21.213375] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:20:54.103 [2024-12-10 11:28:21.213393] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:20:54.362 11:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.362 11:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:54.362 11:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:20:54.362 11:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.362 11:28:21 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:20:54.362 [2024-12-10 11:28:21.228050] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:20:54.362 [2024-12-10 11:28:21.261578] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:54.362 [2024-12-10 11:28:21.262508] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:20:54.362 [2024-12-10 11:28:21.268000] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:54.362 [2024-12-10 11:28:21.268362] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:20:54.362 [2024-12-10 11:28:21.268379] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:20:54.362 11:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.362 11:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:20:54.362 [2024-12-10 11:28:21.462026] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:54.362 [2024-12-10 11:28:21.470925] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:54.362 [2024-12-10 11:28:21.470963] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:54.621 11:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:20:54.621 11:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:54.621 11:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:54.621 11:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.621 11:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:55.189 11:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.189 11:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:55.189 11:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:55.189 11:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.189 11:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:55.449 11:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.449 11:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:55.449 11:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:20:55.449 11:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.449 11:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:56.017 11:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.017 11:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:56.017 11:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:20:56.017 11:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.017 11:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:56.277 11:28:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.277 11:28:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:20:56.277 11:28:23 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:56.277 11:28:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.277 11:28:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:56.277 11:28:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.277 11:28:23 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:56.277 11:28:23 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:20:56.277 11:28:23 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:20:56.277 11:28:23 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:56.277 11:28:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.277 11:28:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:56.277 11:28:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.277 11:28:23 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:56.277 11:28:23 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:20:56.277 ************************************ 00:20:56.277 END TEST test_create_multi_ublk 00:20:56.277 ************************************ 00:20:56.277 11:28:23 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:56.277 00:20:56.277 real 0m4.434s 00:20:56.277 user 0m0.976s 00:20:56.277 sys 0m0.219s 00:20:56.277 11:28:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:56.277 11:28:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:56.536 11:28:23 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:56.536 11:28:23 ublk -- ublk/ublk.sh@147 -- # cleanup 00:20:56.536 11:28:23 ublk -- ublk/ublk.sh@130 -- # killprocess 75370 00:20:56.536 11:28:23 ublk -- common/autotest_common.sh@954 -- # '[' -z 75370 ']' 00:20:56.536 11:28:23 ublk -- common/autotest_common.sh@958 -- # kill -0 75370 00:20:56.536 11:28:23 ublk -- common/autotest_common.sh@959 -- # uname 00:20:56.536 11:28:23 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.536 11:28:23 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75370 00:20:56.536 killing process with pid 75370 00:20:56.536 11:28:23 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:56.536 11:28:23 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:56.536 11:28:23 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75370' 00:20:56.536 11:28:23 ublk -- common/autotest_common.sh@973 -- # kill 75370 00:20:56.536 11:28:23 ublk -- common/autotest_common.sh@978 -- # wait 75370 00:20:57.473 [2024-12-10 11:28:24.547090] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:57.473 [2024-12-10 11:28:24.547154] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:58.852 00:20:58.852 real 0m30.658s 00:20:58.852 user 0m43.459s 00:20:58.852 sys 0m10.203s 00:20:58.852 11:28:25 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:58.852 11:28:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:58.852 ************************************ 00:20:58.852 END TEST ublk 00:20:58.852 ************************************ 00:20:58.852 11:28:25 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:58.852 
11:28:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:58.852 11:28:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:58.852 11:28:25 -- common/autotest_common.sh@10 -- # set +x 00:20:58.852 ************************************ 00:20:58.852 START TEST ublk_recovery 00:20:58.852 ************************************ 00:20:58.852 11:28:25 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:58.852 * Looking for test storage... 00:20:58.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:58.852 11:28:25 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:59.112 11:28:25 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:20:59.112 11:28:25 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:59.112 11:28:26 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:59.112 11:28:26 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:20:59.112 11:28:26 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:59.112 11:28:26 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:59.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.112 --rc genhtml_branch_coverage=1 00:20:59.112 --rc genhtml_function_coverage=1 00:20:59.112 --rc genhtml_legend=1 00:20:59.112 --rc geninfo_all_blocks=1 00:20:59.112 --rc geninfo_unexecuted_blocks=1 00:20:59.112 00:20:59.112 ' 00:20:59.112 11:28:26 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:59.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.112 --rc genhtml_branch_coverage=1 00:20:59.112 --rc genhtml_function_coverage=1 00:20:59.112 --rc genhtml_legend=1 00:20:59.112 --rc geninfo_all_blocks=1 00:20:59.112 --rc geninfo_unexecuted_blocks=1 00:20:59.112 00:20:59.112 ' 00:20:59.112 11:28:26 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:59.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.112 --rc genhtml_branch_coverage=1 00:20:59.112 --rc genhtml_function_coverage=1 00:20:59.112 --rc genhtml_legend=1 00:20:59.112 --rc geninfo_all_blocks=1 00:20:59.112 --rc geninfo_unexecuted_blocks=1 00:20:59.112 00:20:59.112 ' 00:20:59.112 11:28:26 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:59.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.112 --rc genhtml_branch_coverage=1 00:20:59.112 --rc genhtml_function_coverage=1 00:20:59.112 --rc genhtml_legend=1 00:20:59.112 --rc geninfo_all_blocks=1 00:20:59.112 --rc geninfo_unexecuted_blocks=1 00:20:59.112 00:20:59.112 ' 00:20:59.113 11:28:26 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:59.113 11:28:26 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:59.113 11:28:26 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:59.113 11:28:26 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:59.113 11:28:26 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:59.113 11:28:26 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:59.113 11:28:26 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:59.113 11:28:26 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:59.113 11:28:26 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:20:59.113 11:28:26 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:20:59.113 11:28:26 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75798 00:20:59.113 11:28:26 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:59.113 11:28:26 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:59.113 11:28:26 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75798 00:20:59.113 11:28:26 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75798 ']' 00:20:59.113 11:28:26 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.113 11:28:26 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.113 11:28:26 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.113 11:28:26 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.113 11:28:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.113 [2024-12-10 11:28:26.180337] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:20:59.113 [2024-12-10 11:28:26.180685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75798 ] 00:20:59.372 [2024-12-10 11:28:26.359966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:59.372 [2024-12-10 11:28:26.461637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.372 [2024-12-10 11:28:26.461665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.309 11:28:27 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.309 11:28:27 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:21:00.309 11:28:27 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:21:00.309 11:28:27 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.309 11:28:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.309 [2024-12-10 11:28:27.300939] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:00.309 [2024-12-10 11:28:27.303381] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:00.309 11:28:27 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.309 11:28:27 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:21:00.309 11:28:27 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.309 11:28:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.569 malloc0 00:21:00.569 11:28:27 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.569 11:28:27 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:21:00.569 11:28:27 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.569 11:28:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.569 [2024-12-10 11:28:27.445111] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:21:00.569 [2024-12-10 11:28:27.445251] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:21:00.569 [2024-12-10 11:28:27.445266] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:21:00.569 [2024-12-10 11:28:27.445274] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:21:00.569 [2024-12-10 11:28:27.452992] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:00.569 [2024-12-10 11:28:27.453017] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:00.569 [2024-12-10 11:28:27.461040] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:00.569 [2024-12-10 11:28:27.461232] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:21:00.569 [2024-12-10 11:28:27.476969] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:21:00.569 1 00:21:00.569 11:28:27 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.569 11:28:27 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:21:01.507 11:28:28 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75833 00:21:01.507 11:28:28 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:21:01.507 11:28:28 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:21:01.507 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:01.507 fio-3.35 00:21:01.507 Starting 1 process 00:21:06.785 11:28:33 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75798 00:21:06.785 11:28:33 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:21:12.070 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75798 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:21:12.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.070 11:28:38 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75943 00:21:12.070 11:28:38 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:12.070 11:28:38 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:12.070 11:28:38 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75943 00:21:12.070 11:28:38 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75943 ']' 00:21:12.070 11:28:38 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.070 11:28:38 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.070 11:28:38 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.070 11:28:38 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.070 11:28:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.070 [2024-12-10 11:28:38.615812] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:21:12.070 [2024-12-10 11:28:38.616188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75943 ] 00:21:12.070 [2024-12-10 11:28:38.796418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:12.070 [2024-12-10 11:28:38.905561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.070 [2024-12-10 11:28:38.905587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.639 11:28:39 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.639 11:28:39 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:21:12.639 11:28:39 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:21:12.639 11:28:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.639 11:28:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.639 [2024-12-10 11:28:39.747937] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:12.639 [2024-12-10 11:28:39.750673] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:12.898 11:28:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.898 11:28:39 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:21:12.899 11:28:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.899 11:28:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.899 malloc0 00:21:12.899 11:28:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.899 11:28:39 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:21:12.899 11:28:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.899 11:28:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.899 [2024-12-10 11:28:39.890084] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:21:12.899 [2024-12-10 11:28:39.890128] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:21:12.899 [2024-12-10 11:28:39.890141] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:21:12.899 [2024-12-10 11:28:39.897988] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:21:12.899 [2024-12-10 11:28:39.898013] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:21:12.899 [2024-12-10 11:28:39.898023] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:21:12.899 [2024-12-10 11:28:39.898145] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:21:12.899 1 00:21:12.899 11:28:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.899 11:28:39 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75833 00:21:12.899 [2024-12-10 11:28:39.905992] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:21:12.899 [2024-12-10 11:28:39.912659] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:21:12.899 [2024-12-10 11:28:39.920155] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:21:12.899 [2024-12-10 
11:28:39.920181] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:22:09.145 00:22:09.145 fio_test: (groupid=0, jobs=1): err= 0: pid=75838: Tue Dec 10 11:29:28 2024 00:22:09.145 read: IOPS=18.2k, BW=71.1MiB/s (74.5MB/s)(4264MiB/60003msec) 00:22:09.145 slat (usec): min=3, max=1049, avg= 9.36, stdev= 2.66 00:22:09.145 clat (usec): min=1319, max=6435.9k, avg=3458.78, stdev=49244.12 00:22:09.145 lat (usec): min=1327, max=6435.9k, avg=3468.14, stdev=49244.12 00:22:09.145 clat percentiles (usec): 00:22:09.145 | 1.00th=[ 2409], 5.00th=[ 2704], 10.00th=[ 2802], 20.00th=[ 2868], 00:22:09.145 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:22:09.145 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3326], 95.00th=[ 4047], 00:22:09.145 | 99.00th=[ 5473], 99.50th=[ 6194], 99.90th=[ 7832], 99.95th=[ 8979], 00:22:09.145 | 99.99th=[13173] 00:22:09.145 bw ( KiB/s): min=25609, max=84872, per=100.00%, avg=80950.56, stdev=7697.10, samples=107 00:22:09.145 iops : min= 6402, max=21218, avg=20237.58, stdev=1924.31, samples=107 00:22:09.145 write: IOPS=18.2k, BW=71.0MiB/s (74.5MB/s)(4262MiB/60003msec); 0 zone resets 00:22:09.145 slat (usec): min=3, max=497, avg= 9.36, stdev= 2.40 00:22:09.145 clat (usec): min=1243, max=6436.2k, avg=3558.77, stdev=49255.41 00:22:09.145 lat (usec): min=1252, max=6436.2k, avg=3568.13, stdev=49255.41 00:22:09.145 clat percentiles (usec): 00:22:09.145 | 1.00th=[ 2474], 5.00th=[ 2671], 10.00th=[ 2835], 20.00th=[ 2999], 00:22:09.145 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3130], 00:22:09.145 | 70.00th=[ 3163], 80.00th=[ 3228], 90.00th=[ 3392], 95.00th=[ 4080], 00:22:09.145 | 99.00th=[ 5473], 99.50th=[ 6259], 99.90th=[ 7832], 99.95th=[ 8979], 00:22:09.145 | 99.99th=[13173] 00:22:09.145 bw ( KiB/s): min=25864, max=84760, per=100.00%, avg=80896.02, stdev=7664.52, samples=107 00:22:09.145 iops : min= 6466, max=21190, avg=20223.96, stdev=1916.16, samples=107 00:22:09.145 lat (msec) : 2=0.04%, 4=94.65%, 10=5.30%, 20=0.01%, >=2000=0.01% 00:22:09.145 cpu : usr=12.59%, sys=34.22%, ctx=101494, majf=0, minf=13 00:22:09.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:22:09.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.145 issued rwts: total=1091505,1091014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.145 00:22:09.145 Run status group 0 (all jobs): 00:22:09.145 READ: bw=71.1MiB/s (74.5MB/s), 71.1MiB/s-71.1MiB/s (74.5MB/s-74.5MB/s), io=4264MiB (4471MB), run=60003-60003msec 00:22:09.145 WRITE: bw=71.0MiB/s (74.5MB/s), 71.0MiB/s-71.0MiB/s (74.5MB/s-74.5MB/s), io=4262MiB (4469MB), run=60003-60003msec 00:22:09.145 00:22:09.145 Disk stats (read/write): 00:22:09.145 ublkb1: ios=1089217/1088704, merge=0/0, ticks=3655663/3626661, in_queue=7282324, util=99.95% 00:22:09.145 11:29:28 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:22:09.145 11:29:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.145 11:29:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.145 [2024-12-10 11:29:28.772868] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:22:09.145 [2024-12-10 11:29:28.820041] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:09.145 [2024-12-10 11:29:28.820369] 
ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:22:09.145 [2024-12-10 11:29:28.827975] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:09.145 [2024-12-10 11:29:28.828159] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:22:09.145 [2024-12-10 11:29:28.828193] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:22:09.145 11:29:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.145 11:29:28 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:22:09.145 11:29:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.145 11:29:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.145 [2024-12-10 11:29:28.844037] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:09.145 [2024-12-10 11:29:28.851950] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:09.145 [2024-12-10 11:29:28.851986] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:09.145 11:29:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.145 11:29:28 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:22:09.145 11:29:28 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:22:09.145 11:29:28 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75943 00:22:09.145 11:29:28 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75943 ']' 00:22:09.145 11:29:28 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75943 00:22:09.145 11:29:28 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:22:09.145 11:29:28 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.145 11:29:28 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75943 00:22:09.145 killing process with pid 75943 00:22:09.145 11:29:28 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:09.145 11:29:28 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:09.145 11:29:28 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75943' 00:22:09.145 11:29:28 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75943 00:22:09.145 11:29:28 ublk_recovery -- common/autotest_common.sh@978 -- # wait 75943 00:22:09.145 [2024-12-10 11:29:30.446425] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:09.145 [2024-12-10 11:29:30.446485] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:09.145 ************************************ 00:22:09.145 END TEST ublk_recovery 00:22:09.145 ************************************ 00:22:09.145 00:22:09.145 real 1m5.963s 00:22:09.145 user 1m52.005s 00:22:09.145 sys 0m37.060s 00:22:09.145 11:29:31 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:09.145 11:29:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.145 11:29:31 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:22:09.145 11:29:31 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:22:09.145 11:29:31 -- spdk/autotest.sh@260 -- # timing_exit lib 00:22:09.145 11:29:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.145 11:29:31 -- common/autotest_common.sh@10 -- # set +x 00:22:09.145 11:29:31 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:22:09.145 11:29:31 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:22:09.145 11:29:31 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:22:09.145 11:29:31 -- spdk/autotest.sh@311 
-- # '[' 0 -eq 1 ']' 00:22:09.145 11:29:31 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:09.145 11:29:31 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:22:09.145 11:29:31 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:22:09.145 11:29:31 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:22:09.145 11:29:31 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:09.145 11:29:31 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:22:09.145 11:29:31 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:09.145 11:29:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:09.145 11:29:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:09.145 11:29:31 -- common/autotest_common.sh@10 -- # set +x 00:22:09.145 ************************************ 00:22:09.145 START TEST ftl 00:22:09.145 ************************************ 00:22:09.145 11:29:31 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:09.145 * Looking for test storage... 00:22:09.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:09.145 11:29:32 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:09.145 11:29:32 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:22:09.145 11:29:32 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:09.145 11:29:32 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:09.145 11:29:32 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:09.145 11:29:32 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:09.145 11:29:32 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:09.145 11:29:32 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:22:09.145 11:29:32 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:22:09.145 11:29:32 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:22:09.145 11:29:32 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:22:09.145 11:29:32 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:22:09.145 11:29:32 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:22:09.145 11:29:32 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:22:09.145 11:29:32 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:09.145 11:29:32 ftl -- scripts/common.sh@344 -- # case "$op" in 00:22:09.145 11:29:32 ftl -- scripts/common.sh@345 -- # : 1 00:22:09.145 11:29:32 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:09.145 11:29:32 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:09.145 11:29:32 ftl -- scripts/common.sh@365 -- # decimal 1 00:22:09.145 11:29:32 ftl -- scripts/common.sh@353 -- # local d=1 00:22:09.145 11:29:32 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:09.145 11:29:32 ftl -- scripts/common.sh@355 -- # echo 1 00:22:09.145 11:29:32 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:22:09.145 11:29:32 ftl -- scripts/common.sh@366 -- # decimal 2 00:22:09.145 11:29:32 ftl -- scripts/common.sh@353 -- # local d=2 00:22:09.145 11:29:32 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:09.145 11:29:32 ftl -- scripts/common.sh@355 -- # echo 2 00:22:09.145 11:29:32 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:22:09.145 11:29:32 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:09.145 11:29:32 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:09.145 11:29:32 ftl -- scripts/common.sh@368 -- # return 0 00:22:09.145 11:29:32 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:09.145 11:29:32 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:09.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.145 --rc genhtml_branch_coverage=1 00:22:09.145 --rc genhtml_function_coverage=1 00:22:09.145 --rc genhtml_legend=1 00:22:09.145 --rc geninfo_all_blocks=1 00:22:09.145 --rc geninfo_unexecuted_blocks=1 00:22:09.145 00:22:09.145 ' 00:22:09.145 11:29:32 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:09.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.145 --rc genhtml_branch_coverage=1 00:22:09.145 --rc genhtml_function_coverage=1 00:22:09.145 --rc genhtml_legend=1 00:22:09.145 --rc geninfo_all_blocks=1 00:22:09.145 --rc geninfo_unexecuted_blocks=1 00:22:09.145 00:22:09.145 ' 00:22:09.145 11:29:32 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:09.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.145 --rc genhtml_branch_coverage=1 00:22:09.145 --rc genhtml_function_coverage=1 00:22:09.146 --rc genhtml_legend=1 00:22:09.146 --rc geninfo_all_blocks=1 00:22:09.146 --rc geninfo_unexecuted_blocks=1 00:22:09.146 00:22:09.146 ' 00:22:09.146 11:29:32 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:09.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.146 --rc genhtml_branch_coverage=1 00:22:09.146 --rc genhtml_function_coverage=1 00:22:09.146 --rc genhtml_legend=1 00:22:09.146 --rc geninfo_all_blocks=1 00:22:09.146 --rc geninfo_unexecuted_blocks=1 00:22:09.146 00:22:09.146 ' 00:22:09.146 11:29:32 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:09.146 11:29:32 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:09.146 11:29:32 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:09.146 11:29:32 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:09.146 11:29:32 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:22:09.146 11:29:32 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:09.146 11:29:32 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:09.146 11:29:32 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:09.146 11:29:32 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:09.146 11:29:32 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:09.146 11:29:32 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:09.146 11:29:32 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:09.146 11:29:32 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:09.146 11:29:32 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:09.146 11:29:32 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:09.146 11:29:32 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:09.146 11:29:32 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:09.146 11:29:32 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:09.146 11:29:32 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:09.146 11:29:32 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:09.146 11:29:32 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:09.146 11:29:32 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:09.146 11:29:32 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:09.146 11:29:32 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:09.146 11:29:32 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:09.146 11:29:32 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:09.146 11:29:32 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:09.146 11:29:32 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:09.146 11:29:32 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:09.146 11:29:32 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:09.146 11:29:32 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:22:09.146 11:29:32 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:22:09.146 11:29:32 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:22:09.146 11:29:32 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:22:09.146 11:29:32 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:09.146 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:09.146 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:09.146 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:09.146 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:09.146 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:09.146 11:29:33 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76754 00:22:09.146 11:29:33 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:22:09.146 11:29:33 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76754 00:22:09.146 11:29:33 ftl -- common/autotest_common.sh@835 -- # '[' -z 76754 ']' 00:22:09.146 11:29:33 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.146 11:29:33 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.146 11:29:33 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.146 11:29:33 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.146 11:29:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:09.146 [2024-12-10 11:29:33.182070] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:22:09.146 [2024-12-10 11:29:33.182212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76754 ] 00:22:09.146 [2024-12-10 11:29:33.363983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.146 [2024-12-10 11:29:33.476636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.146 11:29:33 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.146 11:29:33 ftl -- common/autotest_common.sh@868 -- # return 0 00:22:09.146 11:29:33 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:22:09.146 11:29:34 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:22:09.146 11:29:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:22:09.146 11:29:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:09.146 11:29:35 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:22:09.146 11:29:35 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:22:09.146 11:29:35 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:22:09.146 11:29:35 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:22:09.146 11:29:35 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:22:09.146 11:29:35 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:22:09.146 11:29:35 ftl -- ftl/ftl.sh@50 -- # break 00:22:09.146 11:29:35 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:22:09.146 11:29:35 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:22:09.146 11:29:35 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:22:09.146 11:29:35 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:22:09.146 11:29:36 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:22:09.146 11:29:36 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:22:09.146 11:29:36 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:22:09.146 11:29:36 ftl -- ftl/ftl.sh@63 -- # break 00:22:09.146 11:29:36 ftl -- ftl/ftl.sh@66 -- # killprocess 76754 00:22:09.146 11:29:36 ftl -- common/autotest_common.sh@954 -- # '[' -z 76754 ']' 00:22:09.146 11:29:36 ftl -- common/autotest_common.sh@958 -- # kill -0 76754 00:22:09.146 11:29:36 ftl -- common/autotest_common.sh@959 -- # uname 00:22:09.146 11:29:36 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.146 11:29:36 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76754 00:22:09.146 killing process with pid 76754 00:22:09.146 11:29:36 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:09.146 11:29:36 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:09.146 11:29:36 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76754' 00:22:09.146 11:29:36 ftl -- common/autotest_common.sh@973 -- # kill 76754 00:22:09.146 11:29:36 ftl -- common/autotest_common.sh@978 -- # wait 76754 00:22:11.685 11:29:38 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:22:11.685 11:29:38 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:22:11.685 11:29:38 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:11.685 11:29:38 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.685 11:29:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:11.685 ************************************ 00:22:11.685 START TEST ftl_fio_basic 00:22:11.685 ************************************ 00:22:11.685 11:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:22:11.685 * Looking for test storage... 00:22:11.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:11.685 11:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:11.685 11:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:22:11.685 11:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:11.685 11:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:11.685 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:11.685 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:11.685 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:11.685 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.685 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:22:11.685 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:22:11.685 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:11.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.686 --rc genhtml_branch_coverage=1 00:22:11.686 --rc genhtml_function_coverage=1 00:22:11.686 --rc genhtml_legend=1 00:22:11.686 --rc geninfo_all_blocks=1 00:22:11.686 --rc geninfo_unexecuted_blocks=1 00:22:11.686 00:22:11.686 ' 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:11.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.686 --rc genhtml_branch_coverage=1 00:22:11.686 --rc genhtml_function_coverage=1 00:22:11.686 --rc genhtml_legend=1 00:22:11.686 --rc geninfo_all_blocks=1 00:22:11.686 --rc geninfo_unexecuted_blocks=1 00:22:11.686 00:22:11.686 ' 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:11.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.686 --rc genhtml_branch_coverage=1 00:22:11.686 --rc genhtml_function_coverage=1 00:22:11.686 --rc genhtml_legend=1 00:22:11.686 --rc geninfo_all_blocks=1 00:22:11.686 --rc geninfo_unexecuted_blocks=1 00:22:11.686 00:22:11.686 ' 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:11.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.686 --rc genhtml_branch_coverage=1 00:22:11.686 --rc genhtml_function_coverage=1 00:22:11.686 --rc genhtml_legend=1 00:22:11.686 --rc geninfo_all_blocks=1 00:22:11.686 --rc geninfo_unexecuted_blocks=1 00:22:11.686 00:22:11.686 ' 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
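
The dirname/readlink pair traced here is the usual SPDK test bootstrap: each test script resolves its own directory, then takes the repository root two levels up. As a generic sketch (the concrete paths are this run's):

    testdir=$(readlink -f "$(dirname "$0")")   # here: /home/vagrant/spdk_repo/spdk/test/ftl
    rootdir=$(readlink -f "$testdir/../..")    # here: /home/vagrant/spdk_repo/spdk
    rpc_py=$rootdir/scripts/rpc.py
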
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid=
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128'
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid=
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240
00:22:11.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]]
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']'
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76902
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76902
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76902 ']'
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:11.686 11:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:22:11.686 [2024-12-10 11:29:38.763409] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization...
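
The target launch above follows the standard helper pattern: fork spdk_tgt in the background with a core mask (-m 7 is binary 111, cores 0 through 2, hence the three reactors reported below), record its pid in svcpid, and have waitforlisten poll the RPC socket until the app answers. A rough sketch, with the polling loop as an illustrative stand-in for the real helper:

    "$spdk_tgt_bin" -m 7 &
    svcpid=$!
    # waitforlisten "$svcpid", approximately:
    for (( i = 0; i < 100; i++ )); do            # max_retries=100, as traced
        "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done
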
00:22:11.686 [2024-12-10 11:29:38.763668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76902 ] 00:22:11.946 [2024-12-10 11:29:38.945818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:11.946 [2024-12-10 11:29:39.056383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.946 [2024-12-10 11:29:39.056509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.946 [2024-12-10 11:29:39.056543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.885 11:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.885 11:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:22:12.885 11:29:39 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:12.885 11:29:39 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:22:12.885 11:29:39 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:12.885 11:29:39 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:22:12.885 11:29:39 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:22:12.885 11:29:39 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:13.145 11:29:40 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:13.145 11:29:40 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:22:13.145 11:29:40 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:13.145 11:29:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:13.145 11:29:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:13.145 11:29:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:13.145 11:29:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:13.145 11:29:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:13.404 11:29:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:13.404 { 00:22:13.404 "name": "nvme0n1", 00:22:13.404 "aliases": [ 00:22:13.404 "69eb926d-1e6d-4bc4-8830-83c36868f923" 00:22:13.404 ], 00:22:13.404 "product_name": "NVMe disk", 00:22:13.404 "block_size": 4096, 00:22:13.404 "num_blocks": 1310720, 00:22:13.404 "uuid": "69eb926d-1e6d-4bc4-8830-83c36868f923", 00:22:13.404 "numa_id": -1, 00:22:13.404 "assigned_rate_limits": { 00:22:13.404 "rw_ios_per_sec": 0, 00:22:13.404 "rw_mbytes_per_sec": 0, 00:22:13.404 "r_mbytes_per_sec": 0, 00:22:13.404 "w_mbytes_per_sec": 0 00:22:13.404 }, 00:22:13.404 "claimed": false, 00:22:13.404 "zoned": false, 00:22:13.404 "supported_io_types": { 00:22:13.404 "read": true, 00:22:13.404 "write": true, 00:22:13.404 "unmap": true, 00:22:13.404 "flush": true, 00:22:13.404 "reset": true, 00:22:13.405 "nvme_admin": true, 00:22:13.405 "nvme_io": true, 00:22:13.405 "nvme_io_md": false, 00:22:13.405 "write_zeroes": true, 00:22:13.405 "zcopy": false, 00:22:13.405 "get_zone_info": false, 00:22:13.405 "zone_management": false, 00:22:13.405 "zone_append": false, 00:22:13.405 "compare": true, 00:22:13.405 "compare_and_write": false, 00:22:13.405 "abort": true, 00:22:13.405 
"seek_hole": false, 00:22:13.405 "seek_data": false, 00:22:13.405 "copy": true, 00:22:13.405 "nvme_iov_md": false 00:22:13.405 }, 00:22:13.405 "driver_specific": { 00:22:13.405 "nvme": [ 00:22:13.405 { 00:22:13.405 "pci_address": "0000:00:11.0", 00:22:13.405 "trid": { 00:22:13.405 "trtype": "PCIe", 00:22:13.405 "traddr": "0000:00:11.0" 00:22:13.405 }, 00:22:13.405 "ctrlr_data": { 00:22:13.405 "cntlid": 0, 00:22:13.405 "vendor_id": "0x1b36", 00:22:13.405 "model_number": "QEMU NVMe Ctrl", 00:22:13.405 "serial_number": "12341", 00:22:13.405 "firmware_revision": "8.0.0", 00:22:13.405 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:13.405 "oacs": { 00:22:13.405 "security": 0, 00:22:13.405 "format": 1, 00:22:13.405 "firmware": 0, 00:22:13.405 "ns_manage": 1 00:22:13.405 }, 00:22:13.405 "multi_ctrlr": false, 00:22:13.405 "ana_reporting": false 00:22:13.405 }, 00:22:13.405 "vs": { 00:22:13.405 "nvme_version": "1.4" 00:22:13.405 }, 00:22:13.405 "ns_data": { 00:22:13.405 "id": 1, 00:22:13.405 "can_share": false 00:22:13.405 } 00:22:13.405 } 00:22:13.405 ], 00:22:13.405 "mp_policy": "active_passive" 00:22:13.405 } 00:22:13.405 } 00:22:13.405 ]' 00:22:13.405 11:29:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:13.405 11:29:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:13.405 11:29:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:13.405 11:29:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:13.405 11:29:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:13.405 11:29:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:22:13.405 11:29:40 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:22:13.405 11:29:40 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:13.405 11:29:40 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:22:13.405 11:29:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:13.405 11:29:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:13.664 11:29:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:22:13.664 11:29:40 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:13.924 11:29:40 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=419abc13-2818-4e2a-b2e7-e018d2d11dca 00:22:13.924 11:29:40 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 419abc13-2818-4e2a-b2e7-e018d2d11dca 00:22:14.184 11:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=5c910bc4-2517-4557-9ac0-93f231a01ca4 00:22:14.184 11:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5c910bc4-2517-4557-9ac0-93f231a01ca4 00:22:14.184 11:29:41 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:22:14.184 11:29:41 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:14.184 11:29:41 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=5c910bc4-2517-4557-9ac0-93f231a01ca4 00:22:14.184 11:29:41 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:22:14.184 11:29:41 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 5c910bc4-2517-4557-9ac0-93f231a01ca4 00:22:14.184 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=5c910bc4-2517-4557-9ac0-93f231a01ca4 
00:22:14.184 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:14.184 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:14.184 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:14.184 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5c910bc4-2517-4557-9ac0-93f231a01ca4 00:22:14.443 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:14.443 { 00:22:14.443 "name": "5c910bc4-2517-4557-9ac0-93f231a01ca4", 00:22:14.443 "aliases": [ 00:22:14.443 "lvs/nvme0n1p0" 00:22:14.443 ], 00:22:14.443 "product_name": "Logical Volume", 00:22:14.443 "block_size": 4096, 00:22:14.443 "num_blocks": 26476544, 00:22:14.443 "uuid": "5c910bc4-2517-4557-9ac0-93f231a01ca4", 00:22:14.443 "assigned_rate_limits": { 00:22:14.444 "rw_ios_per_sec": 0, 00:22:14.444 "rw_mbytes_per_sec": 0, 00:22:14.444 "r_mbytes_per_sec": 0, 00:22:14.444 "w_mbytes_per_sec": 0 00:22:14.444 }, 00:22:14.444 "claimed": false, 00:22:14.444 "zoned": false, 00:22:14.444 "supported_io_types": { 00:22:14.444 "read": true, 00:22:14.444 "write": true, 00:22:14.444 "unmap": true, 00:22:14.444 "flush": false, 00:22:14.444 "reset": true, 00:22:14.444 "nvme_admin": false, 00:22:14.444 "nvme_io": false, 00:22:14.444 "nvme_io_md": false, 00:22:14.444 "write_zeroes": true, 00:22:14.444 "zcopy": false, 00:22:14.444 "get_zone_info": false, 00:22:14.444 "zone_management": false, 00:22:14.444 "zone_append": false, 00:22:14.444 "compare": false, 00:22:14.444 "compare_and_write": false, 00:22:14.444 "abort": false, 00:22:14.444 "seek_hole": true, 00:22:14.444 "seek_data": true, 00:22:14.444 "copy": false, 00:22:14.444 "nvme_iov_md": false 00:22:14.444 }, 00:22:14.444 "driver_specific": { 00:22:14.444 "lvol": { 00:22:14.444 "lvol_store_uuid": "419abc13-2818-4e2a-b2e7-e018d2d11dca", 00:22:14.444 "base_bdev": "nvme0n1", 00:22:14.444 "thin_provision": true, 00:22:14.444 "num_allocated_clusters": 0, 00:22:14.444 "snapshot": false, 00:22:14.444 "clone": false, 00:22:14.444 "esnap_clone": false 00:22:14.444 } 00:22:14.444 } 00:22:14.444 } 00:22:14.444 ]' 00:22:14.444 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:14.444 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:14.444 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:14.444 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:14.444 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:14.444 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:22:14.444 11:29:41 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:22:14.444 11:29:41 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:22:14.444 11:29:41 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:14.703 11:29:41 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:14.703 11:29:41 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:14.703 11:29:41 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 5c910bc4-2517-4557-9ac0-93f231a01ca4 00:22:14.703 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=5c910bc4-2517-4557-9ac0-93f231a01ca4 00:22:14.703 11:29:41 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:14.703 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:14.703 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:14.703 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5c910bc4-2517-4557-9ac0-93f231a01ca4 00:22:14.963 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:14.963 { 00:22:14.963 "name": "5c910bc4-2517-4557-9ac0-93f231a01ca4", 00:22:14.963 "aliases": [ 00:22:14.963 "lvs/nvme0n1p0" 00:22:14.963 ], 00:22:14.963 "product_name": "Logical Volume", 00:22:14.963 "block_size": 4096, 00:22:14.963 "num_blocks": 26476544, 00:22:14.963 "uuid": "5c910bc4-2517-4557-9ac0-93f231a01ca4", 00:22:14.963 "assigned_rate_limits": { 00:22:14.963 "rw_ios_per_sec": 0, 00:22:14.963 "rw_mbytes_per_sec": 0, 00:22:14.963 "r_mbytes_per_sec": 0, 00:22:14.963 "w_mbytes_per_sec": 0 00:22:14.963 }, 00:22:14.963 "claimed": false, 00:22:14.963 "zoned": false, 00:22:14.963 "supported_io_types": { 00:22:14.963 "read": true, 00:22:14.963 "write": true, 00:22:14.963 "unmap": true, 00:22:14.963 "flush": false, 00:22:14.963 "reset": true, 00:22:14.963 "nvme_admin": false, 00:22:14.963 "nvme_io": false, 00:22:14.963 "nvme_io_md": false, 00:22:14.963 "write_zeroes": true, 00:22:14.963 "zcopy": false, 00:22:14.963 "get_zone_info": false, 00:22:14.963 "zone_management": false, 00:22:14.963 "zone_append": false, 00:22:14.963 "compare": false, 00:22:14.963 "compare_and_write": false, 00:22:14.963 "abort": false, 00:22:14.963 "seek_hole": true, 00:22:14.963 "seek_data": true, 00:22:14.963 "copy": false, 00:22:14.963 "nvme_iov_md": false 00:22:14.963 }, 00:22:14.963 "driver_specific": { 00:22:14.963 "lvol": { 00:22:14.963 "lvol_store_uuid": "419abc13-2818-4e2a-b2e7-e018d2d11dca", 00:22:14.963 "base_bdev": "nvme0n1", 00:22:14.963 "thin_provision": true, 00:22:14.963 "num_allocated_clusters": 0, 00:22:14.963 "snapshot": false, 00:22:14.963 "clone": false, 00:22:14.963 "esnap_clone": false 00:22:14.963 } 00:22:14.963 } 00:22:14.963 } 00:22:14.963 ]' 00:22:14.963 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:14.963 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:14.963 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:14.963 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:14.963 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:14.963 11:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:22:14.963 11:29:41 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:22:14.963 11:29:41 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:15.223 11:29:42 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:22:15.223 11:29:42 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:22:15.223 11:29:42 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:22:15.223 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:22:15.223 11:29:42 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 5c910bc4-2517-4557-9ac0-93f231a01ca4 00:22:15.223 11:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=5c910bc4-2517-4557-9ac0-93f231a01ca4 00:22:15.223 11:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:15.223 11:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:15.223 11:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:15.223 11:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5c910bc4-2517-4557-9ac0-93f231a01ca4 00:22:15.223 11:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:15.223 { 00:22:15.223 "name": "5c910bc4-2517-4557-9ac0-93f231a01ca4", 00:22:15.223 "aliases": [ 00:22:15.223 "lvs/nvme0n1p0" 00:22:15.223 ], 00:22:15.223 "product_name": "Logical Volume", 00:22:15.223 "block_size": 4096, 00:22:15.223 "num_blocks": 26476544, 00:22:15.223 "uuid": "5c910bc4-2517-4557-9ac0-93f231a01ca4", 00:22:15.223 "assigned_rate_limits": { 00:22:15.223 "rw_ios_per_sec": 0, 00:22:15.223 "rw_mbytes_per_sec": 0, 00:22:15.223 "r_mbytes_per_sec": 0, 00:22:15.223 "w_mbytes_per_sec": 0 00:22:15.223 }, 00:22:15.223 "claimed": false, 00:22:15.223 "zoned": false, 00:22:15.223 "supported_io_types": { 00:22:15.223 "read": true, 00:22:15.223 "write": true, 00:22:15.223 "unmap": true, 00:22:15.223 "flush": false, 00:22:15.223 "reset": true, 00:22:15.223 "nvme_admin": false, 00:22:15.223 "nvme_io": false, 00:22:15.223 "nvme_io_md": false, 00:22:15.223 "write_zeroes": true, 00:22:15.223 "zcopy": false, 00:22:15.223 "get_zone_info": false, 00:22:15.223 "zone_management": false, 00:22:15.223 "zone_append": false, 00:22:15.223 "compare": false, 00:22:15.223 "compare_and_write": false, 00:22:15.223 "abort": false, 00:22:15.223 "seek_hole": true, 00:22:15.223 "seek_data": true, 00:22:15.223 "copy": false, 00:22:15.223 "nvme_iov_md": false 00:22:15.223 }, 00:22:15.223 "driver_specific": { 00:22:15.223 "lvol": { 00:22:15.223 "lvol_store_uuid": "419abc13-2818-4e2a-b2e7-e018d2d11dca", 00:22:15.223 "base_bdev": "nvme0n1", 00:22:15.223 "thin_provision": true, 00:22:15.223 "num_allocated_clusters": 0, 00:22:15.223 "snapshot": false, 00:22:15.223 "clone": false, 00:22:15.223 "esnap_clone": false 00:22:15.223 } 00:22:15.223 } 00:22:15.223 } 00:22:15.223 ]' 00:22:15.223 11:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:15.482 11:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:15.482 11:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:15.483 11:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:15.483 11:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:15.483 11:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:22:15.483 11:29:42 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:22:15.483 11:29:42 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:22:15.483 11:29:42 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5c910bc4-2517-4557-9ac0-93f231a01ca4 -c nvc0n1p0 --l2p_dram_limit 60 00:22:15.483 [2024-12-10 11:29:42.585824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.483 [2024-12-10 11:29:42.585874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:15.483 [2024-12-10 11:29:42.585893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:15.483 
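
Two things are worth noting in the stretch above. First, the "[: -eq: unary operator expected" from fio.sh line 52 comes from a single-bracket test on a variable that is unset in this run ('[' -eq 1 ']'); the test simply fails and the default branch is taken, and a guarded form such as [[ ${l2p_percentage:-0} -eq 1 ]] would silence it (variable name illustrative). Second, the create call that kicks off the FTL startup trace, spelled out from the trace: base device (the thin lvol carved from the NVMe at 0000:00:11.0), write-buffer cache (nvc0n1p0 split from the NVMe at 0000:00:10.0), and a 60 MiB cap on the DRAM-resident L2P table:

    "$rpc_py" -t 240 bdev_ftl_create -b ftl0 \
        -d 5c910bc4-2517-4557-9ac0-93f231a01ca4 \
        -c nvc0n1p0 \
        --l2p_dram_limit 60

The layout dumped below is sized accordingly: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB l2p region, and the 60 MiB --l2p_dram_limit is why a later notice caps the resident L2P cache at 59 (of 60) MiB.
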
[2024-12-10 11:29:42.585904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.483 [2024-12-10 11:29:42.586011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.483 [2024-12-10 11:29:42.586028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:15.483 [2024-12-10 11:29:42.586044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:22:15.483 [2024-12-10 11:29:42.586054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.483 [2024-12-10 11:29:42.586115] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:15.483 [2024-12-10 11:29:42.587173] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:15.483 [2024-12-10 11:29:42.587202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.483 [2024-12-10 11:29:42.587214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:15.483 [2024-12-10 11:29:42.587227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:22:15.483 [2024-12-10 11:29:42.587237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.483 [2024-12-10 11:29:42.587341] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 711b0430-db0c-4628-bb81-82ae83ab3f72 00:22:15.483 [2024-12-10 11:29:42.588834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.483 [2024-12-10 11:29:42.588876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:15.483 [2024-12-10 11:29:42.588889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:15.483 [2024-12-10 11:29:42.588902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.743 [2024-12-10 11:29:42.596461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.743 [2024-12-10 11:29:42.596504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:15.743 [2024-12-10 11:29:42.596517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.451 ms 00:22:15.743 [2024-12-10 11:29:42.596531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.743 [2024-12-10 11:29:42.596689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.743 [2024-12-10 11:29:42.596707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:15.743 [2024-12-10 11:29:42.596718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:22:15.743 [2024-12-10 11:29:42.596735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.743 [2024-12-10 11:29:42.596837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.743 [2024-12-10 11:29:42.596853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:15.743 [2024-12-10 11:29:42.596865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:15.743 [2024-12-10 11:29:42.596879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.743 [2024-12-10 11:29:42.596945] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:15.743 [2024-12-10 11:29:42.602083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.743 [2024-12-10 
11:29:42.602119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:15.743 [2024-12-10 11:29:42.602135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.151 ms 00:22:15.743 [2024-12-10 11:29:42.602148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.743 [2024-12-10 11:29:42.602239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.743 [2024-12-10 11:29:42.602255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:15.743 [2024-12-10 11:29:42.602270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:15.743 [2024-12-10 11:29:42.602280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.743 [2024-12-10 11:29:42.602346] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:15.743 [2024-12-10 11:29:42.602512] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:15.743 [2024-12-10 11:29:42.602536] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:15.743 [2024-12-10 11:29:42.602551] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:15.743 [2024-12-10 11:29:42.602567] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:15.743 [2024-12-10 11:29:42.602579] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:15.743 [2024-12-10 11:29:42.602595] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:15.743 [2024-12-10 11:29:42.602606] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:15.743 [2024-12-10 11:29:42.602618] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:15.743 [2024-12-10 11:29:42.602628] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:15.743 [2024-12-10 11:29:42.602641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.743 [2024-12-10 11:29:42.602654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:15.743 [2024-12-10 11:29:42.602668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:22:15.743 [2024-12-10 11:29:42.602679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.743 [2024-12-10 11:29:42.602793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.743 [2024-12-10 11:29:42.602808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:15.743 [2024-12-10 11:29:42.602821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:15.743 [2024-12-10 11:29:42.602831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.743 [2024-12-10 11:29:42.602983] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:15.743 [2024-12-10 11:29:42.602997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:15.743 [2024-12-10 11:29:42.603014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:15.743 [2024-12-10 11:29:42.603025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.743 [2024-12-10 11:29:42.603038] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:22:15.743 [2024-12-10 11:29:42.603048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:15.743 [2024-12-10 11:29:42.603060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:15.743 [2024-12-10 11:29:42.603070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:15.743 [2024-12-10 11:29:42.603084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:15.743 [2024-12-10 11:29:42.603094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:15.743 [2024-12-10 11:29:42.603105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:15.743 [2024-12-10 11:29:42.603117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:15.743 [2024-12-10 11:29:42.603129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:15.743 [2024-12-10 11:29:42.603139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:15.743 [2024-12-10 11:29:42.603151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:15.743 [2024-12-10 11:29:42.603162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.743 [2024-12-10 11:29:42.603176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:15.743 [2024-12-10 11:29:42.603186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:15.743 [2024-12-10 11:29:42.603198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.743 [2024-12-10 11:29:42.603208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:15.743 [2024-12-10 11:29:42.603220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:15.743 [2024-12-10 11:29:42.603229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.743 [2024-12-10 11:29:42.603241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:15.743 [2024-12-10 11:29:42.603250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:15.743 [2024-12-10 11:29:42.603262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.743 [2024-12-10 11:29:42.603271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:15.744 [2024-12-10 11:29:42.603283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:15.744 [2024-12-10 11:29:42.603292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.744 [2024-12-10 11:29:42.603303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:15.744 [2024-12-10 11:29:42.603312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:15.744 [2024-12-10 11:29:42.603325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.744 [2024-12-10 11:29:42.603335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:15.744 [2024-12-10 11:29:42.603354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:15.744 [2024-12-10 11:29:42.603382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:15.744 [2024-12-10 11:29:42.603397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:15.744 [2024-12-10 11:29:42.603406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:15.744 [2024-12-10 11:29:42.603421] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:15.744 [2024-12-10 11:29:42.603431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:15.744 [2024-12-10 11:29:42.603445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:15.744 [2024-12-10 11:29:42.603454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.744 [2024-12-10 11:29:42.603468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:15.744 [2024-12-10 11:29:42.603478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:15.744 [2024-12-10 11:29:42.603492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.744 [2024-12-10 11:29:42.603502] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:15.744 [2024-12-10 11:29:42.603517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:15.744 [2024-12-10 11:29:42.603527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:15.744 [2024-12-10 11:29:42.603540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.744 [2024-12-10 11:29:42.603550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:15.744 [2024-12-10 11:29:42.603564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:15.744 [2024-12-10 11:29:42.603574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:15.744 [2024-12-10 11:29:42.603585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:15.744 [2024-12-10 11:29:42.603594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:15.744 [2024-12-10 11:29:42.603606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:15.744 [2024-12-10 11:29:42.603617] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:15.744 [2024-12-10 11:29:42.603632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:15.744 [2024-12-10 11:29:42.603644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:15.744 [2024-12-10 11:29:42.603657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:15.744 [2024-12-10 11:29:42.603668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:15.744 [2024-12-10 11:29:42.603680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:15.744 [2024-12-10 11:29:42.603691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:15.744 [2024-12-10 11:29:42.603705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:15.744 [2024-12-10 11:29:42.603716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:15.744 [2024-12-10 11:29:42.603730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:22:15.744 [2024-12-10 11:29:42.603740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:15.744 [2024-12-10 11:29:42.603755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:15.744 [2024-12-10 11:29:42.603765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:15.744 [2024-12-10 11:29:42.603778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:15.744 [2024-12-10 11:29:42.603792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:15.744 [2024-12-10 11:29:42.603808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:15.744 [2024-12-10 11:29:42.603819] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:15.744 [2024-12-10 11:29:42.603834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:15.744 [2024-12-10 11:29:42.603849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:15.744 [2024-12-10 11:29:42.603862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:15.744 [2024-12-10 11:29:42.603873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:15.744 [2024-12-10 11:29:42.603886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:15.744 [2024-12-10 11:29:42.603898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.744 [2024-12-10 11:29:42.603911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:15.744 [2024-12-10 11:29:42.603932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:22:15.744 [2024-12-10 11:29:42.603945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.744 [2024-12-10 11:29:42.604060] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:22:15.744 [2024-12-10 11:29:42.604079] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:19.947 [2024-12-10 11:29:46.577789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.947 [2024-12-10 11:29:46.577888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:19.947 [2024-12-10 11:29:46.577906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3980.182 ms 00:22:19.947 [2024-12-10 11:29:46.577919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.947 [2024-12-10 11:29:46.615413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.947 [2024-12-10 11:29:46.615484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:19.947 [2024-12-10 11:29:46.615500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.128 ms 00:22:19.947 [2024-12-10 11:29:46.615514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.947 [2024-12-10 11:29:46.615691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.947 [2024-12-10 11:29:46.615715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:19.947 [2024-12-10 11:29:46.615727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:19.947 [2024-12-10 11:29:46.615742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.947 [2024-12-10 11:29:46.672202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.947 [2024-12-10 11:29:46.672251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:19.947 [2024-12-10 11:29:46.672270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.478 ms 00:22:19.947 [2024-12-10 11:29:46.672285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.947 [2024-12-10 11:29:46.672347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.947 [2024-12-10 11:29:46.672361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:19.947 [2024-12-10 11:29:46.672373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:19.947 [2024-12-10 11:29:46.672386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.947 [2024-12-10 11:29:46.672895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.947 [2024-12-10 11:29:46.672914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:19.947 [2024-12-10 11:29:46.672925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:22:19.947 [2024-12-10 11:29:46.672959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.947 [2024-12-10 11:29:46.673114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.947 [2024-12-10 11:29:46.673134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:19.947 [2024-12-10 11:29:46.673146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:22:19.947 [2024-12-10 11:29:46.673162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.947 [2024-12-10 11:29:46.694348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.947 [2024-12-10 11:29:46.694395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:19.947 [2024-12-10 
11:29:46.694410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.169 ms 00:22:19.947 [2024-12-10 11:29:46.694423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.947 [2024-12-10 11:29:46.706744] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:19.947 [2024-12-10 11:29:46.723278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.947 [2024-12-10 11:29:46.723318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:19.947 [2024-12-10 11:29:46.723356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.748 ms 00:22:19.947 [2024-12-10 11:29:46.723367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.947 [2024-12-10 11:29:46.816463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.947 [2024-12-10 11:29:46.816513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:19.947 [2024-12-10 11:29:46.816552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.176 ms 00:22:19.947 [2024-12-10 11:29:46.816563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.947 [2024-12-10 11:29:46.816789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.947 [2024-12-10 11:29:46.816806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:19.947 [2024-12-10 11:29:46.816823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:22:19.947 [2024-12-10 11:29:46.816833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.947 [2024-12-10 11:29:46.852428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.947 [2024-12-10 11:29:46.852469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:19.947 [2024-12-10 11:29:46.852509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.544 ms 00:22:19.947 [2024-12-10 11:29:46.852520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.947 [2024-12-10 11:29:46.889122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.947 [2024-12-10 11:29:46.889163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:19.947 [2024-12-10 11:29:46.889181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.589 ms 00:22:19.947 [2024-12-10 11:29:46.889192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.947 [2024-12-10 11:29:46.890033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.947 [2024-12-10 11:29:46.890059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:19.947 [2024-12-10 11:29:46.890075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:22:19.947 [2024-12-10 11:29:46.890086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.947 [2024-12-10 11:29:46.990264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.947 [2024-12-10 11:29:46.990308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:19.947 [2024-12-10 11:29:46.990330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.253 ms 00:22:19.948 [2024-12-10 11:29:46.990344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.948 [2024-12-10 
11:29:47.026530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.948 [2024-12-10 11:29:47.026574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:19.948 [2024-12-10 11:29:47.026590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.094 ms 00:22:19.948 [2024-12-10 11:29:47.026601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.210 [2024-12-10 11:29:47.061777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.210 [2024-12-10 11:29:47.061818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:20.210 [2024-12-10 11:29:47.061834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.156 ms 00:22:20.210 [2024-12-10 11:29:47.061843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.210 [2024-12-10 11:29:47.098270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.210 [2024-12-10 11:29:47.098311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:20.210 [2024-12-10 11:29:47.098327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.412 ms 00:22:20.210 [2024-12-10 11:29:47.098338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.210 [2024-12-10 11:29:47.098427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.210 [2024-12-10 11:29:47.098441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:20.210 [2024-12-10 11:29:47.098462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:20.210 [2024-12-10 11:29:47.098471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.210 [2024-12-10 11:29:47.098639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.210 [2024-12-10 11:29:47.098654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:20.210 [2024-12-10 11:29:47.098668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:20.210 [2024-12-10 11:29:47.098679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.210 [2024-12-10 11:29:47.099968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4520.966 ms, result 0 00:22:20.210 { 00:22:20.210 "name": "ftl0", 00:22:20.210 "uuid": "711b0430-db0c-4628-bb81-82ae83ab3f72" 00:22:20.210 } 00:22:20.210 11:29:47 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:22:20.210 11:29:47 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:22:20.210 11:29:47 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:20.210 11:29:47 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:22:20.210 11:29:47 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:20.210 11:29:47 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:20.210 11:29:47 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:20.474 11:29:47 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:22:20.474 [ 00:22:20.474 { 00:22:20.474 "name": "ftl0", 00:22:20.474 "aliases": [ 00:22:20.474 "711b0430-db0c-4628-bb81-82ae83ab3f72" 00:22:20.474 ], 00:22:20.474 "product_name": "FTL 
disk", 00:22:20.474 "block_size": 4096, 00:22:20.474 "num_blocks": 20971520, 00:22:20.474 "uuid": "711b0430-db0c-4628-bb81-82ae83ab3f72", 00:22:20.474 "assigned_rate_limits": { 00:22:20.474 "rw_ios_per_sec": 0, 00:22:20.474 "rw_mbytes_per_sec": 0, 00:22:20.474 "r_mbytes_per_sec": 0, 00:22:20.474 "w_mbytes_per_sec": 0 00:22:20.474 }, 00:22:20.474 "claimed": false, 00:22:20.474 "zoned": false, 00:22:20.474 "supported_io_types": { 00:22:20.474 "read": true, 00:22:20.474 "write": true, 00:22:20.474 "unmap": true, 00:22:20.474 "flush": true, 00:22:20.474 "reset": false, 00:22:20.474 "nvme_admin": false, 00:22:20.474 "nvme_io": false, 00:22:20.474 "nvme_io_md": false, 00:22:20.474 "write_zeroes": true, 00:22:20.474 "zcopy": false, 00:22:20.474 "get_zone_info": false, 00:22:20.474 "zone_management": false, 00:22:20.474 "zone_append": false, 00:22:20.474 "compare": false, 00:22:20.474 "compare_and_write": false, 00:22:20.474 "abort": false, 00:22:20.474 "seek_hole": false, 00:22:20.474 "seek_data": false, 00:22:20.474 "copy": false, 00:22:20.474 "nvme_iov_md": false 00:22:20.474 }, 00:22:20.474 "driver_specific": { 00:22:20.474 "ftl": { 00:22:20.474 "base_bdev": "5c910bc4-2517-4557-9ac0-93f231a01ca4", 00:22:20.474 "cache": "nvc0n1p0" 00:22:20.474 } 00:22:20.474 } 00:22:20.474 } 00:22:20.474 ] 00:22:20.474 11:29:47 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:22:20.474 11:29:47 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:22:20.474 11:29:47 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:20.764 11:29:47 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:22:20.764 11:29:47 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:21.024 [2024-12-10 11:29:47.900784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.024 [2024-12-10 11:29:47.900845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:21.024 [2024-12-10 11:29:47.900861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:21.024 [2024-12-10 11:29:47.900874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.024 [2024-12-10 11:29:47.900941] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:21.024 [2024-12-10 11:29:47.905217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.024 [2024-12-10 11:29:47.905259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:21.024 [2024-12-10 11:29:47.905275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.256 ms 00:22:21.024 [2024-12-10 11:29:47.905286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.024 [2024-12-10 11:29:47.906132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.024 [2024-12-10 11:29:47.906160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:21.024 [2024-12-10 11:29:47.906175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:22:21.024 [2024-12-10 11:29:47.906187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.024 [2024-12-10 11:29:47.908696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.024 [2024-12-10 11:29:47.908724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:21.024 
[2024-12-10 11:29:47.908744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.465 ms 00:22:21.024 [2024-12-10 11:29:47.908755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.024 [2024-12-10 11:29:47.913976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.024 [2024-12-10 11:29:47.914011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:21.024 [2024-12-10 11:29:47.914026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.174 ms 00:22:21.024 [2024-12-10 11:29:47.914037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.024 [2024-12-10 11:29:47.950020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.024 [2024-12-10 11:29:47.950062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:21.024 [2024-12-10 11:29:47.950098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.897 ms 00:22:21.024 [2024-12-10 11:29:47.950109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.024 [2024-12-10 11:29:47.972434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.024 [2024-12-10 11:29:47.972474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:21.024 [2024-12-10 11:29:47.972514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.292 ms 00:22:21.024 [2024-12-10 11:29:47.972525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.024 [2024-12-10 11:29:47.972839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.024 [2024-12-10 11:29:47.972858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:21.024 [2024-12-10 11:29:47.972872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:22:21.024 [2024-12-10 11:29:47.972882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.024 [2024-12-10 11:29:48.009130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.024 [2024-12-10 11:29:48.009180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:21.024 [2024-12-10 11:29:48.009198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.258 ms 00:22:21.024 [2024-12-10 11:29:48.009208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.024 [2024-12-10 11:29:48.044783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.024 [2024-12-10 11:29:48.044818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:21.024 [2024-12-10 11:29:48.044861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.565 ms 00:22:21.024 [2024-12-10 11:29:48.044870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.024 [2024-12-10 11:29:48.079588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.024 [2024-12-10 11:29:48.079625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:21.024 [2024-12-10 11:29:48.079642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.695 ms 00:22:21.024 [2024-12-10 11:29:48.079651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.024 [2024-12-10 11:29:48.114678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.025 [2024-12-10 11:29:48.114716] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:21.025 [2024-12-10 11:29:48.114749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.923 ms 00:22:21.025 [2024-12-10 11:29:48.114758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.025 [2024-12-10 11:29:48.114824] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:21.025 [2024-12-10 11:29:48.114840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.114855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.114867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.114881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.114892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.114905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.114931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.114948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.114959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.114972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.114983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.114996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 
[2024-12-10 11:29:48.115144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:22:21.025 [2024-12-10 11:29:48.115455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:21.025 [2024-12-10 11:29:48.115977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:21.026 [2024-12-10 11:29:48.115988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:21.026 [2024-12-10 11:29:48.116001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:21.026 [2024-12-10 11:29:48.116013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:21.026 [2024-12-10 11:29:48.116025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:21.026 [2024-12-10 11:29:48.116036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:21.026 [2024-12-10 11:29:48.116048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:21.026 [2024-12-10 11:29:48.116060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:21.026 [2024-12-10 11:29:48.116073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:21.026 [2024-12-10 11:29:48.116084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:21.026 [2024-12-10 11:29:48.116098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:21.026 [2024-12-10 11:29:48.116108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:21.026 [2024-12-10 11:29:48.116122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:21.026 [2024-12-10 11:29:48.116142] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:21.026 [2024-12-10 11:29:48.116155] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 711b0430-db0c-4628-bb81-82ae83ab3f72 00:22:21.026 [2024-12-10 11:29:48.116166] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:21.026 [2024-12-10 11:29:48.116180] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:21.026 [2024-12-10 11:29:48.116190] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:21.026 [2024-12-10 11:29:48.116206] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:21.026 [2024-12-10 11:29:48.116216] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:21.026 [2024-12-10 11:29:48.116229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:21.026 [2024-12-10 11:29:48.116239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:21.026 [2024-12-10 11:29:48.116250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:21.026 [2024-12-10 11:29:48.116259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:21.026 [2024-12-10 11:29:48.116272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.026 [2024-12-10 11:29:48.116282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:21.026 [2024-12-10 11:29:48.116295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.453 ms 00:22:21.026 [2024-12-10 11:29:48.116305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.026 [2024-12-10 11:29:48.135582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.026 [2024-12-10 11:29:48.135622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:21.026 [2024-12-10 11:29:48.135663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.208 ms 00:22:21.026 [2024-12-10 11:29:48.135674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.285 [2024-12-10 11:29:48.136247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.285 [2024-12-10 11:29:48.136267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:21.285 [2024-12-10 11:29:48.136280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:22:21.285 [2024-12-10 11:29:48.136291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.285 [2024-12-10 11:29:48.204274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.285 [2024-12-10 11:29:48.204312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:21.285 [2024-12-10 11:29:48.204327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.285 [2024-12-10 11:29:48.204338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
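Each management step above is traced by mngt/ftl_mngt.c as a four-line Action (or Rollback) / name / duration / status record, so the per-step cost of a startup or shutdown can be pulled straight out of a saved log. A rough sketch, assuming the console output has been captured to a file — ftl0.log is a hypothetical name, not something the test suite writes:

    # rank FTL management steps by duration; 'ftl0.log' is a hypothetical log capture
    awk '/trace_step/ && /name:/     { sub(/.*name: /, ""); step = $0 }
         /trace_step/ && /duration:/ { printf "%10s ms  %s\n", $(NF-1), step }' ftl0.log | sort -rn

On this run it would show the scrub of the NV cache far in front (~3980 ms), the metadata persist steps (superblock, band, trim and valid-map metadata) clustering around 35 ms each, and the Rollback-tagged deinit records below reporting 0.000 ms, since they only unwind initialization.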
00:22:21.285 [2024-12-10 11:29:48.204426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.285 [2024-12-10 11:29:48.204438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:21.285 [2024-12-10 11:29:48.204451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.285 [2024-12-10 11:29:48.204461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.285 [2024-12-10 11:29:48.204570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.285 [2024-12-10 11:29:48.204604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:21.285 [2024-12-10 11:29:48.204619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.285 [2024-12-10 11:29:48.204629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.285 [2024-12-10 11:29:48.204673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.285 [2024-12-10 11:29:48.204684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:21.285 [2024-12-10 11:29:48.204697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.285 [2024-12-10 11:29:48.204708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.285 [2024-12-10 11:29:48.330842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.285 [2024-12-10 11:29:48.330899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:21.285 [2024-12-10 11:29:48.330924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.285 [2024-12-10 11:29:48.330936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.545 [2024-12-10 11:29:48.427010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.545 [2024-12-10 11:29:48.427061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:21.545 [2024-12-10 11:29:48.427095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.545 [2024-12-10 11:29:48.427106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.545 [2024-12-10 11:29:48.427247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.545 [2024-12-10 11:29:48.427261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:21.545 [2024-12-10 11:29:48.427278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.545 [2024-12-10 11:29:48.427288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.545 [2024-12-10 11:29:48.427404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.545 [2024-12-10 11:29:48.427417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:21.545 [2024-12-10 11:29:48.427431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.545 [2024-12-10 11:29:48.427441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.545 [2024-12-10 11:29:48.427606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.545 [2024-12-10 11:29:48.427621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:21.545 [2024-12-10 11:29:48.427635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.545 [2024-12-10 
11:29:48.427648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.545 [2024-12-10 11:29:48.427720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.545 [2024-12-10 11:29:48.427733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:21.545 [2024-12-10 11:29:48.427747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.545 [2024-12-10 11:29:48.427757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.545 [2024-12-10 11:29:48.427822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.545 [2024-12-10 11:29:48.427834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:21.545 [2024-12-10 11:29:48.427847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.545 [2024-12-10 11:29:48.427861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.545 [2024-12-10 11:29:48.427948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.545 [2024-12-10 11:29:48.427962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:21.545 [2024-12-10 11:29:48.427976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.545 [2024-12-10 11:29:48.427986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.545 [2024-12-10 11:29:48.428232] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 528.261 ms, result 0 00:22:21.545 true 00:22:21.545 11:29:48 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76902 00:22:21.545 11:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76902 ']' 00:22:21.545 11:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76902 00:22:21.545 11:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:22:21.545 11:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.545 11:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76902 00:22:21.545 11:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:21.545 11:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:21.545 killing process with pid 76902 00:22:21.545 11:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76902' 00:22:21.545 11:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76902 00:22:21.545 11:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76902 00:22:26.820 11:29:52 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:22:26.820 11:29:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:26.820 11:29:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:22:26.820 11:29:52 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:26.820 11:29:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:26.820 11:29:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:26.820 11:29:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:26.820 11:29:52 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:26.820 11:29:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:26.820 11:29:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:26.820 11:29:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:26.820 11:29:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:22:26.820 11:29:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:26.820 11:29:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:26.820 11:29:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:26.820 11:29:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:22:26.820 11:29:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:26.820 11:29:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:26.820 11:29:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:26.820 11:29:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:22:26.820 11:29:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:26.820 11:29:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:26.820 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:22:26.820 fio-3.35 00:22:26.820 Starting 1 thread 00:22:32.096 00:22:32.096 test: (groupid=0, jobs=1): err= 0: pid=77119: Tue Dec 10 11:29:58 2024 00:22:32.096 read: IOPS=882, BW=58.6MiB/s (61.4MB/s)(255MiB/4344msec) 00:22:32.096 slat (usec): min=4, max=481, avg=13.04, stdev= 8.43 00:22:32.096 clat (usec): min=331, max=943, avg=512.44, stdev=52.54 00:22:32.096 lat (usec): min=346, max=1010, avg=525.48, stdev=53.62 00:22:32.096 clat percentiles (usec): 00:22:32.096 | 1.00th=[ 400], 5.00th=[ 420], 10.00th=[ 461], 20.00th=[ 478], 00:22:32.096 | 30.00th=[ 482], 40.00th=[ 490], 50.00th=[ 506], 60.00th=[ 537], 00:22:32.096 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[ 570], 95.00th=[ 578], 00:22:32.096 | 99.00th=[ 635], 99.50th=[ 685], 99.90th=[ 816], 99.95th=[ 930], 00:22:32.096 | 99.99th=[ 947] 00:22:32.096 write: IOPS=888, BW=59.0MiB/s (61.9MB/s)(256MiB/4339msec); 0 zone resets 00:22:32.096 slat (nsec): min=15796, max=85969, avg=24814.21, stdev=5446.70 00:22:32.096 clat (usec): min=380, max=1093, avg=568.03, stdev=68.01 00:22:32.096 lat (usec): min=420, max=1116, avg=592.85, stdev=68.15 00:22:32.096 clat percentiles (usec): 00:22:32.096 | 1.00th=[ 441], 5.00th=[ 490], 10.00th=[ 498], 20.00th=[ 510], 00:22:32.096 | 30.00th=[ 545], 40.00th=[ 562], 50.00th=[ 570], 60.00th=[ 578], 00:22:32.096 | 70.00th=[ 586], 80.00th=[ 594], 90.00th=[ 627], 95.00th=[ 652], 00:22:32.096 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 1012], 99.95th=[ 1057], 00:22:32.096 | 99.99th=[ 1090] 00:22:32.096 bw ( KiB/s): min=59160, max=62016, per=100.00%, avg=60435.00, stdev=936.43, samples=8 00:22:32.096 iops : min= 870, max= 912, avg=888.75, stdev=13.77, samples=8 00:22:32.096 lat (usec) : 500=29.87%, 750=68.92%, 1000=1.13% 00:22:32.096 lat (msec) 
: 2=0.08% 00:22:32.096 cpu : usr=99.17%, sys=0.07%, ctx=6, majf=0, minf=1167 00:22:32.096 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:32.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.097 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:32.097 00:22:32.097 Run status group 0 (all jobs): 00:22:32.097 READ: bw=58.6MiB/s (61.4MB/s), 58.6MiB/s-58.6MiB/s (61.4MB/s-61.4MB/s), io=255MiB (267MB), run=4344-4344msec 00:22:32.097 WRITE: bw=59.0MiB/s (61.9MB/s), 59.0MiB/s-59.0MiB/s (61.9MB/s-61.9MB/s), io=256MiB (269MB), run=4339-4339msec 00:22:34.003 ----------------------------------------------------- 00:22:34.003 Suppressions used: 00:22:34.003 count bytes template 00:22:34.003 1 5 /usr/src/fio/parse.c 00:22:34.003 1 8 libtcmalloc_minimal.so 00:22:34.003 1 904 libcrypto.so 00:22:34.003 ----------------------------------------------------- 00:22:34.003 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:34.003 11:30:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:34.263 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:34.263 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:34.263 fio-3.35 00:22:34.263 Starting 2 threads 00:23:06.348 00:23:06.348 first_half: (groupid=0, jobs=1): err= 0: pid=77228: Tue Dec 10 11:30:30 2024 00:23:06.348 read: IOPS=2344, BW=9378KiB/s (9603kB/s)(255MiB/27826msec) 00:23:06.348 slat (nsec): min=3446, max=49556, avg=8733.32, stdev=4310.92 00:23:06.348 clat (usec): min=1128, max=245210, avg=43387.96, stdev=22933.67 00:23:06.348 lat (usec): min=1136, max=245227, avg=43396.70, stdev=22934.51 00:23:06.348 clat percentiles (msec): 00:23:06.348 | 1.00th=[ 17], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 38], 00:23:06.348 | 30.00th=[ 38], 40.00th=[ 38], 50.00th=[ 39], 60.00th=[ 39], 00:23:06.348 | 70.00th=[ 39], 80.00th=[ 41], 90.00th=[ 46], 95.00th=[ 72], 00:23:06.348 | 99.00th=[ 176], 99.50th=[ 192], 99.90th=[ 226], 99.95th=[ 232], 00:23:06.348 | 99.99th=[ 243] 00:23:06.348 write: IOPS=3111, BW=12.2MiB/s (12.7MB/s)(256MiB/21060msec); 0 zone resets 00:23:06.348 slat (usec): min=3, max=823, avg=10.05, stdev= 7.51 00:23:06.348 clat (usec): min=482, max=101953, avg=11117.04, stdev=19501.50 00:23:06.348 lat (usec): min=497, max=101960, avg=11127.09, stdev=19501.76 00:23:06.348 clat percentiles (usec): 00:23:06.348 | 1.00th=[ 1205], 5.00th=[ 1565], 10.00th=[ 1778], 20.00th=[ 2089], 00:23:06.348 | 30.00th=[ 2704], 40.00th=[ 4293], 50.00th=[ 5735], 60.00th=[ 6915], 00:23:06.348 | 70.00th=[ 7832], 80.00th=[ 12387], 90.00th=[ 16057], 95.00th=[ 81265], 00:23:06.348 | 99.00th=[ 91751], 99.50th=[ 93848], 99.90th=[ 98042], 99.95th=[ 99091], 00:23:06.348 | 99.99th=[100140] 00:23:06.348 bw ( KiB/s): min= 360, max=43088, per=100.00%, avg=21845.33, stdev=13730.36, samples=24 00:23:06.348 iops : min= 90, max=10772, avg=5461.33, stdev=3432.59, samples=24 00:23:06.348 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.11% 00:23:06.348 lat (msec) : 2=8.65%, 4=10.38%, 10=19.99%, 20=7.98%, 50=46.29% 00:23:06.348 lat (msec) : 100=4.93%, 250=1.65% 00:23:06.348 cpu : usr=99.19%, sys=0.17%, ctx=113, majf=0, minf=5587 00:23:06.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:06.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:06.348 complete : 0=0.0%, 4=99.6%, 8=0.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:06.348 issued rwts: total=65238,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:06.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:06.348 second_half: (groupid=0, jobs=1): err= 0: pid=77229: Tue Dec 10 11:30:30 2024 00:23:06.348 read: IOPS=2330, BW=9321KiB/s (9544kB/s)(255MiB/28055msec) 00:23:06.348 slat (nsec): min=3532, max=44382, avg=10341.59, stdev=3829.89 00:23:06.348 clat (usec): min=1156, max=319873, avg=42059.21, stdev=22933.94 00:23:06.348 lat (usec): min=1173, max=319882, avg=42069.55, stdev=22934.46 00:23:06.348 clat percentiles (msec): 00:23:06.348 | 1.00th=[ 12], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 38], 00:23:06.348 | 30.00th=[ 38], 40.00th=[ 38], 50.00th=[ 39], 60.00th=[ 39], 00:23:06.348 | 70.00th=[ 39], 80.00th=[ 40], 90.00th=[ 46], 95.00th=[ 61], 
00:23:06.348 | 99.00th=[ 176], 99.50th=[ 190], 99.90th=[ 236], 99.95th=[ 275], 00:23:06.348 | 99.99th=[ 313] 00:23:06.348 write: IOPS=2434, BW=9738KiB/s (9971kB/s)(256MiB/26921msec); 0 zone resets 00:23:06.348 slat (usec): min=4, max=721, avg=10.71, stdev= 6.60 00:23:06.348 clat (usec): min=560, max=101599, avg=12807.99, stdev=20543.65 00:23:06.348 lat (usec): min=571, max=101614, avg=12818.69, stdev=20544.29 00:23:06.348 clat percentiles (usec): 00:23:06.348 | 1.00th=[ 1123], 5.00th=[ 1500], 10.00th=[ 1795], 20.00th=[ 2212], 00:23:06.348 | 30.00th=[ 3687], 40.00th=[ 5735], 50.00th=[ 7111], 60.00th=[ 8029], 00:23:06.348 | 70.00th=[ 9241], 80.00th=[ 12780], 90.00th=[ 25560], 95.00th=[ 83362], 00:23:06.348 | 99.00th=[ 93848], 99.50th=[ 94897], 99.90th=[ 98042], 99.95th=[ 99091], 00:23:06.348 | 99.99th=[101188] 00:23:06.348 bw ( KiB/s): min= 704, max=46568, per=96.14%, avg=18724.57, stdev=12677.35, samples=28 00:23:06.348 iops : min= 176, max=11642, avg=4681.14, stdev=3169.34, samples=28 00:23:06.348 lat (usec) : 750=0.03%, 1000=0.21% 00:23:06.348 lat (msec) : 2=7.36%, 4=8.05%, 10=21.42%, 20=9.71%, 50=47.35% 00:23:06.348 lat (msec) : 100=4.36%, 250=1.46%, 500=0.04% 00:23:06.349 cpu : usr=99.19%, sys=0.21%, ctx=44, majf=0, minf=5512 00:23:06.349 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:06.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:06.349 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:06.349 issued rwts: total=65372,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:06.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:06.349 00:23:06.349 Run status group 0 (all jobs): 00:23:06.349 READ: bw=18.2MiB/s (19.1MB/s), 9321KiB/s-9378KiB/s (9544kB/s-9603kB/s), io=510MiB (535MB), run=27826-28055msec 00:23:06.349 WRITE: bw=19.0MiB/s (19.9MB/s), 9738KiB/s-12.2MiB/s (9971kB/s-12.7MB/s), io=512MiB (537MB), run=21060-26921msec 00:23:06.349 ----------------------------------------------------- 00:23:06.349 Suppressions used: 00:23:06.349 count bytes template 00:23:06.349 2 10 /usr/src/fio/parse.c 00:23:06.349 1 96 /usr/src/fio/iolog.c 00:23:06.349 1 8 libtcmalloc_minimal.so 00:23:06.349 1 904 libcrypto.so 00:23:06.349 ----------------------------------------------------- 00:23:06.349 00:23:06.349 11:30:32 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:23:06.349 11:30:32 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:06.349 11:30:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:06.349 11:30:32 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:06.349 11:30:32 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:23:06.349 11:30:32 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:06.349 11:30:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:06.349 11:30:32 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:06.349 11:30:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:06.349 11:30:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:06.349 11:30:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:06.349 
11:30:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:06.349 11:30:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:06.349 11:30:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:06.349 11:30:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:06.349 11:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:06.349 11:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:06.349 11:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:06.349 11:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:06.349 11:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:06.349 11:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:06.349 11:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:06.349 11:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:06.349 11:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:06.349 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:06.349 fio-3.35 00:23:06.349 Starting 1 thread 00:23:24.438 00:23:24.438 test: (groupid=0, jobs=1): err= 0: pid=77587: Tue Dec 10 11:30:51 2024 00:23:24.438 read: IOPS=6376, BW=24.9MiB/s (26.1MB/s)(255MiB/10225msec) 00:23:24.438 slat (nsec): min=3409, max=66393, avg=8176.09, stdev=3468.20 00:23:24.438 clat (usec): min=715, max=38902, avg=20060.09, stdev=907.53 00:23:24.438 lat (usec): min=719, max=38914, avg=20068.27, stdev=907.35 00:23:24.438 clat percentiles (usec): 00:23:24.438 | 1.00th=[19268], 5.00th=[19268], 10.00th=[19530], 20.00th=[19792], 00:23:24.438 | 30.00th=[19792], 40.00th=[19792], 50.00th=[20055], 60.00th=[20055], 00:23:24.438 | 70.00th=[20317], 80.00th=[20317], 90.00th=[20579], 95.00th=[20841], 00:23:24.438 | 99.00th=[23725], 99.50th=[24249], 99.90th=[28705], 99.95th=[33817], 00:23:24.438 | 99.99th=[38011] 00:23:24.438 write: IOPS=10.6k, BW=41.3MiB/s (43.3MB/s)(256MiB/6204msec); 0 zone resets 00:23:24.438 slat (usec): min=4, max=1600, avg= 9.25, stdev=11.54 00:23:24.438 clat (usec): min=688, max=64994, avg=12057.94, stdev=14505.51 00:23:24.438 lat (usec): min=699, max=65004, avg=12067.19, stdev=14505.53 00:23:24.438 clat percentiles (usec): 00:23:24.438 | 1.00th=[ 1205], 5.00th=[ 1450], 10.00th=[ 1647], 20.00th=[ 1876], 00:23:24.438 | 30.00th=[ 2073], 40.00th=[ 2474], 50.00th=[ 7898], 60.00th=[ 9503], 00:23:24.438 | 70.00th=[10945], 80.00th=[13042], 90.00th=[43779], 95.00th=[45351], 00:23:24.438 | 99.00th=[47449], 99.50th=[47973], 99.90th=[51119], 99.95th=[52691], 00:23:24.438 | 99.99th=[60556] 00:23:24.438 bw ( KiB/s): min=14576, max=54256, per=95.44%, avg=40329.85, stdev=9845.50, samples=13 00:23:24.438 iops : min= 3644, max=13564, avg=10082.46, stdev=2461.37, samples=13 00:23:24.438 lat (usec) : 750=0.01%, 1000=0.08% 00:23:24.438 lat (msec) : 2=13.12%, 4=7.73%, 10=11.05%, 20=35.95%, 50=31.99% 00:23:24.438 lat (msec) : 100=0.07% 00:23:24.438 cpu : usr=98.98%, sys=0.30%, ctx=25, majf=0, 
minf=5563 00:23:24.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:24.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.439 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:24.439 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:24.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:24.439 00:23:24.439 Run status group 0 (all jobs): 00:23:24.439 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=255MiB (267MB), run=10225-10225msec 00:23:24.439 WRITE: bw=41.3MiB/s (43.3MB/s), 41.3MiB/s-41.3MiB/s (43.3MB/s-43.3MB/s), io=256MiB (268MB), run=6204-6204msec 00:23:26.343 ----------------------------------------------------- 00:23:26.343 Suppressions used: 00:23:26.343 count bytes template 00:23:26.343 1 5 /usr/src/fio/parse.c 00:23:26.343 2 192 /usr/src/fio/iolog.c 00:23:26.343 1 8 libtcmalloc_minimal.so 00:23:26.343 1 904 libcrypto.so 00:23:26.343 ----------------------------------------------------- 00:23:26.343 00:23:26.343 11:30:53 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:23:26.343 11:30:53 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:26.343 11:30:53 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:26.343 11:30:53 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:26.343 11:30:53 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:23:26.343 Remove shared memory files 00:23:26.343 11:30:53 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:26.343 11:30:53 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:23:26.343 11:30:53 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:23:26.343 11:30:53 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57862 /dev/shm/spdk_tgt_trace.pid75798 00:23:26.343 11:30:53 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:26.343 11:30:53 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:23:26.343 00:23:26.343 real 1m14.861s 00:23:26.343 user 2m43.546s 00:23:26.343 sys 0m4.023s 00:23:26.343 11:30:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:26.343 11:30:53 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:26.343 ************************************ 00:23:26.343 END TEST ftl_fio_basic 00:23:26.343 ************************************ 00:23:26.343 11:30:53 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:23:26.343 11:30:53 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:26.343 11:30:53 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:26.343 11:30:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:26.343 ************************************ 00:23:26.343 START TEST ftl_bdevperf 00:23:26.343 ************************************ 00:23:26.343 11:30:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:23:26.603 * Looking for test storage... 
00:23:26.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:26.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.603 --rc genhtml_branch_coverage=1 00:23:26.603 --rc genhtml_function_coverage=1 00:23:26.603 --rc genhtml_legend=1 00:23:26.603 --rc geninfo_all_blocks=1 00:23:26.603 --rc geninfo_unexecuted_blocks=1 00:23:26.603 00:23:26.603 ' 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:26.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.603 --rc genhtml_branch_coverage=1 00:23:26.603 
--rc genhtml_function_coverage=1 00:23:26.603 --rc genhtml_legend=1 00:23:26.603 --rc geninfo_all_blocks=1 00:23:26.603 --rc geninfo_unexecuted_blocks=1 00:23:26.603 00:23:26.603 ' 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:26.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.603 --rc genhtml_branch_coverage=1 00:23:26.603 --rc genhtml_function_coverage=1 00:23:26.603 --rc genhtml_legend=1 00:23:26.603 --rc geninfo_all_blocks=1 00:23:26.603 --rc geninfo_unexecuted_blocks=1 00:23:26.603 00:23:26.603 ' 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:26.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.603 --rc genhtml_branch_coverage=1 00:23:26.603 --rc genhtml_function_coverage=1 00:23:26.603 --rc genhtml_legend=1 00:23:26.603 --rc geninfo_all_blocks=1 00:23:26.603 --rc geninfo_unexecuted_blocks=1 00:23:26.603 00:23:26.603 ' 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77859 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:23:26.603 11:30:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77859 00:23:26.604 11:30:53 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77859 ']' 00:23:26.604 11:30:53 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.604 11:30:53 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.604 11:30:53 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.604 11:30:53 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.604 11:30:53 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:26.604 [2024-12-10 11:30:53.694158] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
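
bdevperf is launched here with -z, so it comes up idle and waits for RPC-driven configuration instead of running a job immediately; -T ftl0 points the run at the bdev the script is about to create. The harness then blocks in waitforlisten until the RPC socket answers, and only afterwards builds the device stack. A condensed sketch of that launch sequence as traced above (waitforlisten and killprocess are autotest_common.sh helpers):

# Start bdevperf idle (-z); ftl0 does not exist yet.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
bdevperf_pid=$!                                   # 77859 in this run
trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
waitforlisten $bdevperf_pid                       # poll /var/tmp/spdk.sock until RPC is up
# ...bdev creation RPCs follow, then bdevperf.py perform_tests drives each workload.

Starting idle is what lets a single bdevperf process host the three different workloads below without restarting between them.
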
00:23:26.604 [2024-12-10 11:30:53.694269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77859 ] 00:23:26.862 [2024-12-10 11:30:53.876756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.121 [2024-12-10 11:30:53.983832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.717 11:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.717 11:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:23:27.717 11:30:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:27.717 11:30:54 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:23:27.717 11:30:54 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:27.717 11:30:54 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:23:27.717 11:30:54 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:23:27.717 11:30:54 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:27.717 11:30:54 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:27.717 11:30:54 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:23:27.717 11:30:54 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:27.717 11:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:27.717 11:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:27.717 11:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:23:27.717 11:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:27.717 11:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:27.976 11:30:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:27.977 { 00:23:27.977 "name": "nvme0n1", 00:23:27.977 "aliases": [ 00:23:27.977 "f1245f54-e74f-4fd0-ae97-1d18ab0b9a2e" 00:23:27.977 ], 00:23:27.977 "product_name": "NVMe disk", 00:23:27.977 "block_size": 4096, 00:23:27.977 "num_blocks": 1310720, 00:23:27.977 "uuid": "f1245f54-e74f-4fd0-ae97-1d18ab0b9a2e", 00:23:27.977 "numa_id": -1, 00:23:27.977 "assigned_rate_limits": { 00:23:27.977 "rw_ios_per_sec": 0, 00:23:27.977 "rw_mbytes_per_sec": 0, 00:23:27.977 "r_mbytes_per_sec": 0, 00:23:27.977 "w_mbytes_per_sec": 0 00:23:27.977 }, 00:23:27.977 "claimed": true, 00:23:27.977 "claim_type": "read_many_write_one", 00:23:27.977 "zoned": false, 00:23:27.977 "supported_io_types": { 00:23:27.977 "read": true, 00:23:27.977 "write": true, 00:23:27.977 "unmap": true, 00:23:27.977 "flush": true, 00:23:27.977 "reset": true, 00:23:27.977 "nvme_admin": true, 00:23:27.977 "nvme_io": true, 00:23:27.977 "nvme_io_md": false, 00:23:27.977 "write_zeroes": true, 00:23:27.977 "zcopy": false, 00:23:27.977 "get_zone_info": false, 00:23:27.977 "zone_management": false, 00:23:27.977 "zone_append": false, 00:23:27.977 "compare": true, 00:23:27.977 "compare_and_write": false, 00:23:27.977 "abort": true, 00:23:27.977 "seek_hole": false, 00:23:27.977 "seek_data": false, 00:23:27.977 "copy": true, 00:23:27.977 "nvme_iov_md": false 00:23:27.977 }, 00:23:27.977 "driver_specific": { 00:23:27.977 
"nvme": [ 00:23:27.977 { 00:23:27.977 "pci_address": "0000:00:11.0", 00:23:27.977 "trid": { 00:23:27.977 "trtype": "PCIe", 00:23:27.977 "traddr": "0000:00:11.0" 00:23:27.977 }, 00:23:27.977 "ctrlr_data": { 00:23:27.977 "cntlid": 0, 00:23:27.977 "vendor_id": "0x1b36", 00:23:27.977 "model_number": "QEMU NVMe Ctrl", 00:23:27.977 "serial_number": "12341", 00:23:27.977 "firmware_revision": "8.0.0", 00:23:27.977 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:27.977 "oacs": { 00:23:27.977 "security": 0, 00:23:27.977 "format": 1, 00:23:27.977 "firmware": 0, 00:23:27.977 "ns_manage": 1 00:23:27.977 }, 00:23:27.977 "multi_ctrlr": false, 00:23:27.977 "ana_reporting": false 00:23:27.977 }, 00:23:27.977 "vs": { 00:23:27.977 "nvme_version": "1.4" 00:23:27.977 }, 00:23:27.977 "ns_data": { 00:23:27.977 "id": 1, 00:23:27.977 "can_share": false 00:23:27.977 } 00:23:27.977 } 00:23:27.977 ], 00:23:27.977 "mp_policy": "active_passive" 00:23:27.977 } 00:23:27.977 } 00:23:27.977 ]' 00:23:27.977 11:30:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:28.235 11:30:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:28.235 11:30:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:28.235 11:30:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:28.235 11:30:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:28.235 11:30:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:23:28.235 11:30:55 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:23:28.235 11:30:55 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:28.235 11:30:55 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:23:28.235 11:30:55 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:28.235 11:30:55 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:28.494 11:30:55 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=419abc13-2818-4e2a-b2e7-e018d2d11dca 00:23:28.494 11:30:55 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:23:28.494 11:30:55 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 419abc13-2818-4e2a-b2e7-e018d2d11dca 00:23:28.494 11:30:55 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:28.753 11:30:55 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=aecda1d6-c3d7-412a-9c87-97299120f55c 00:23:28.753 11:30:55 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u aecda1d6-c3d7-412a-9c87-97299120f55c 00:23:29.012 11:30:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a 00:23:29.012 11:30:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a 00:23:29.012 11:30:56 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:23:29.012 11:30:56 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:29.012 11:30:56 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a 00:23:29.012 11:30:56 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:23:29.012 11:30:56 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a 00:23:29.012 11:30:56 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a 00:23:29.012 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:29.012 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:23:29.012 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:29.012 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a 00:23:29.271 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:29.271 { 00:23:29.271 "name": "fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a", 00:23:29.271 "aliases": [ 00:23:29.271 "lvs/nvme0n1p0" 00:23:29.271 ], 00:23:29.271 "product_name": "Logical Volume", 00:23:29.271 "block_size": 4096, 00:23:29.271 "num_blocks": 26476544, 00:23:29.271 "uuid": "fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a", 00:23:29.271 "assigned_rate_limits": { 00:23:29.271 "rw_ios_per_sec": 0, 00:23:29.271 "rw_mbytes_per_sec": 0, 00:23:29.271 "r_mbytes_per_sec": 0, 00:23:29.271 "w_mbytes_per_sec": 0 00:23:29.271 }, 00:23:29.271 "claimed": false, 00:23:29.271 "zoned": false, 00:23:29.271 "supported_io_types": { 00:23:29.271 "read": true, 00:23:29.271 "write": true, 00:23:29.271 "unmap": true, 00:23:29.271 "flush": false, 00:23:29.271 "reset": true, 00:23:29.271 "nvme_admin": false, 00:23:29.271 "nvme_io": false, 00:23:29.271 "nvme_io_md": false, 00:23:29.271 "write_zeroes": true, 00:23:29.271 "zcopy": false, 00:23:29.272 "get_zone_info": false, 00:23:29.272 "zone_management": false, 00:23:29.272 "zone_append": false, 00:23:29.272 "compare": false, 00:23:29.272 "compare_and_write": false, 00:23:29.272 "abort": false, 00:23:29.272 "seek_hole": true, 00:23:29.272 "seek_data": true, 00:23:29.272 "copy": false, 00:23:29.272 "nvme_iov_md": false 00:23:29.272 }, 00:23:29.272 "driver_specific": { 00:23:29.272 "lvol": { 00:23:29.272 "lvol_store_uuid": "aecda1d6-c3d7-412a-9c87-97299120f55c", 00:23:29.272 "base_bdev": "nvme0n1", 00:23:29.272 "thin_provision": true, 00:23:29.272 "num_allocated_clusters": 0, 00:23:29.272 "snapshot": false, 00:23:29.272 "clone": false, 00:23:29.272 "esnap_clone": false 00:23:29.272 } 00:23:29.272 } 00:23:29.272 } 00:23:29.272 ]' 00:23:29.272 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:29.272 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:29.272 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:29.272 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:29.272 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:29.272 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:23:29.272 11:30:56 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:23:29.272 11:30:56 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:23:29.272 11:30:56 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:29.531 11:30:56 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:29.531 11:30:56 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:29.531 11:30:56 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a 00:23:29.531 11:30:56 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a 00:23:29.531 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:29.531 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:23:29.531 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:29.531 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a 00:23:29.789 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:29.789 { 00:23:29.789 "name": "fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a", 00:23:29.789 "aliases": [ 00:23:29.789 "lvs/nvme0n1p0" 00:23:29.789 ], 00:23:29.790 "product_name": "Logical Volume", 00:23:29.790 "block_size": 4096, 00:23:29.790 "num_blocks": 26476544, 00:23:29.790 "uuid": "fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a", 00:23:29.790 "assigned_rate_limits": { 00:23:29.790 "rw_ios_per_sec": 0, 00:23:29.790 "rw_mbytes_per_sec": 0, 00:23:29.790 "r_mbytes_per_sec": 0, 00:23:29.790 "w_mbytes_per_sec": 0 00:23:29.790 }, 00:23:29.790 "claimed": false, 00:23:29.790 "zoned": false, 00:23:29.790 "supported_io_types": { 00:23:29.790 "read": true, 00:23:29.790 "write": true, 00:23:29.790 "unmap": true, 00:23:29.790 "flush": false, 00:23:29.790 "reset": true, 00:23:29.790 "nvme_admin": false, 00:23:29.790 "nvme_io": false, 00:23:29.790 "nvme_io_md": false, 00:23:29.790 "write_zeroes": true, 00:23:29.790 "zcopy": false, 00:23:29.790 "get_zone_info": false, 00:23:29.790 "zone_management": false, 00:23:29.790 "zone_append": false, 00:23:29.790 "compare": false, 00:23:29.790 "compare_and_write": false, 00:23:29.790 "abort": false, 00:23:29.790 "seek_hole": true, 00:23:29.790 "seek_data": true, 00:23:29.790 "copy": false, 00:23:29.790 "nvme_iov_md": false 00:23:29.790 }, 00:23:29.790 "driver_specific": { 00:23:29.790 "lvol": { 00:23:29.790 "lvol_store_uuid": "aecda1d6-c3d7-412a-9c87-97299120f55c", 00:23:29.790 "base_bdev": "nvme0n1", 00:23:29.790 "thin_provision": true, 00:23:29.790 "num_allocated_clusters": 0, 00:23:29.790 "snapshot": false, 00:23:29.790 "clone": false, 00:23:29.790 "esnap_clone": false 00:23:29.790 } 00:23:29.790 } 00:23:29.790 } 00:23:29.790 ]' 00:23:29.790 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:29.790 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:29.790 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:29.790 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:29.790 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:29.790 11:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:23:29.790 11:30:56 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:23:29.790 11:30:56 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:30.049 11:30:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:23:30.049 11:30:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a 00:23:30.049 11:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a 00:23:30.049 11:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:30.049 11:30:57 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:23:30.049 11:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:30.049 11:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a 00:23:30.307 11:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:30.307 { 00:23:30.307 "name": "fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a", 00:23:30.307 "aliases": [ 00:23:30.307 "lvs/nvme0n1p0" 00:23:30.307 ], 00:23:30.307 "product_name": "Logical Volume", 00:23:30.307 "block_size": 4096, 00:23:30.307 "num_blocks": 26476544, 00:23:30.307 "uuid": "fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a", 00:23:30.307 "assigned_rate_limits": { 00:23:30.307 "rw_ios_per_sec": 0, 00:23:30.307 "rw_mbytes_per_sec": 0, 00:23:30.307 "r_mbytes_per_sec": 0, 00:23:30.307 "w_mbytes_per_sec": 0 00:23:30.307 }, 00:23:30.307 "claimed": false, 00:23:30.307 "zoned": false, 00:23:30.307 "supported_io_types": { 00:23:30.307 "read": true, 00:23:30.307 "write": true, 00:23:30.307 "unmap": true, 00:23:30.307 "flush": false, 00:23:30.307 "reset": true, 00:23:30.307 "nvme_admin": false, 00:23:30.307 "nvme_io": false, 00:23:30.307 "nvme_io_md": false, 00:23:30.307 "write_zeroes": true, 00:23:30.307 "zcopy": false, 00:23:30.307 "get_zone_info": false, 00:23:30.307 "zone_management": false, 00:23:30.307 "zone_append": false, 00:23:30.307 "compare": false, 00:23:30.307 "compare_and_write": false, 00:23:30.307 "abort": false, 00:23:30.307 "seek_hole": true, 00:23:30.307 "seek_data": true, 00:23:30.307 "copy": false, 00:23:30.307 "nvme_iov_md": false 00:23:30.307 }, 00:23:30.307 "driver_specific": { 00:23:30.308 "lvol": { 00:23:30.308 "lvol_store_uuid": "aecda1d6-c3d7-412a-9c87-97299120f55c", 00:23:30.308 "base_bdev": "nvme0n1", 00:23:30.308 "thin_provision": true, 00:23:30.308 "num_allocated_clusters": 0, 00:23:30.308 "snapshot": false, 00:23:30.308 "clone": false, 00:23:30.308 "esnap_clone": false 00:23:30.308 } 00:23:30.308 } 00:23:30.308 } 00:23:30.308 ]' 00:23:30.308 11:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:30.308 11:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:30.308 11:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:30.308 11:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:30.308 11:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:30.308 11:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:23:30.308 11:30:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:23:30.308 11:30:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a -c nvc0n1p0 --l2p_dram_limit 20 00:23:30.567 [2024-12-10 11:30:57.602640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.567 [2024-12-10 11:30:57.602719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:30.567 [2024-12-10 11:30:57.602737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:30.567 [2024-12-10 11:30:57.602753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.568 [2024-12-10 11:30:57.602821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.568 [2024-12-10 11:30:57.602840] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:30.568 [2024-12-10 11:30:57.602853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:30.568 [2024-12-10 11:30:57.602868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.568 [2024-12-10 11:30:57.602892] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:30.568 [2024-12-10 11:30:57.604026] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:30.568 [2024-12-10 11:30:57.604063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.568 [2024-12-10 11:30:57.604080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:30.568 [2024-12-10 11:30:57.604094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.179 ms 00:23:30.568 [2024-12-10 11:30:57.604110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.568 [2024-12-10 11:30:57.604194] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0fed0ed2-5488-4d63-a538-76c7c2f5bd9d 00:23:30.568 [2024-12-10 11:30:57.605666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.568 [2024-12-10 11:30:57.605709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:30.568 [2024-12-10 11:30:57.605731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:30.568 [2024-12-10 11:30:57.605743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.568 [2024-12-10 11:30:57.613358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.568 [2024-12-10 11:30:57.613408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:30.568 [2024-12-10 11:30:57.613427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.579 ms 00:23:30.568 [2024-12-10 11:30:57.613443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.568 [2024-12-10 11:30:57.613581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.568 [2024-12-10 11:30:57.613599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:30.568 [2024-12-10 11:30:57.613621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:23:30.568 [2024-12-10 11:30:57.613633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.568 [2024-12-10 11:30:57.613706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.568 [2024-12-10 11:30:57.613720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:30.568 [2024-12-10 11:30:57.613737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:30.568 [2024-12-10 11:30:57.613750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.568 [2024-12-10 11:30:57.613784] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:30.568 [2024-12-10 11:30:57.618868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.568 [2024-12-10 11:30:57.618911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:30.568 [2024-12-10 11:30:57.618936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.108 ms 00:23:30.568 [2024-12-10 11:30:57.618956] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.568 [2024-12-10 11:30:57.618993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.568 [2024-12-10 11:30:57.619009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:30.568 [2024-12-10 11:30:57.619021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:30.568 [2024-12-10 11:30:57.619036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.568 [2024-12-10 11:30:57.619073] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:30.568 [2024-12-10 11:30:57.619242] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:30.568 [2024-12-10 11:30:57.619261] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:30.568 [2024-12-10 11:30:57.619280] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:30.568 [2024-12-10 11:30:57.619296] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:30.568 [2024-12-10 11:30:57.619314] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:30.568 [2024-12-10 11:30:57.619327] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:30.568 [2024-12-10 11:30:57.619344] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:30.568 [2024-12-10 11:30:57.619357] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:30.568 [2024-12-10 11:30:57.619373] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:30.568 [2024-12-10 11:30:57.619389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.568 [2024-12-10 11:30:57.619404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:30.568 [2024-12-10 11:30:57.619418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:23:30.568 [2024-12-10 11:30:57.619433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.568 [2024-12-10 11:30:57.619511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.568 [2024-12-10 11:30:57.619534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:30.568 [2024-12-10 11:30:57.619547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:30.568 [2024-12-10 11:30:57.619567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.568 [2024-12-10 11:30:57.619650] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:30.568 [2024-12-10 11:30:57.619672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:30.568 [2024-12-10 11:30:57.619685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:30.568 [2024-12-10 11:30:57.619701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.568 [2024-12-10 11:30:57.619714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:30.568 [2024-12-10 11:30:57.619728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:30.568 [2024-12-10 11:30:57.619741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:30.568 
[2024-12-10 11:30:57.619756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:30.568 [2024-12-10 11:30:57.619768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:30.568 [2024-12-10 11:30:57.619782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:30.568 [2024-12-10 11:30:57.619794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:30.568 [2024-12-10 11:30:57.619825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:30.568 [2024-12-10 11:30:57.619838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:30.568 [2024-12-10 11:30:57.619853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:30.568 [2024-12-10 11:30:57.619866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:30.568 [2024-12-10 11:30:57.619886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.568 [2024-12-10 11:30:57.619898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:30.568 [2024-12-10 11:30:57.619925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:30.568 [2024-12-10 11:30:57.619938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.568 [2024-12-10 11:30:57.619953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:30.568 [2024-12-10 11:30:57.619964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:30.568 [2024-12-10 11:30:57.619978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:30.568 [2024-12-10 11:30:57.619991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:30.568 [2024-12-10 11:30:57.620006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:30.568 [2024-12-10 11:30:57.620017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:30.568 [2024-12-10 11:30:57.620031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:30.568 [2024-12-10 11:30:57.620043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:30.568 [2024-12-10 11:30:57.620058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:30.568 [2024-12-10 11:30:57.620070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:30.568 [2024-12-10 11:30:57.620084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:30.568 [2024-12-10 11:30:57.620095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:30.568 [2024-12-10 11:30:57.620111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:30.568 [2024-12-10 11:30:57.620123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:30.568 [2024-12-10 11:30:57.620138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:30.568 [2024-12-10 11:30:57.620151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:30.568 [2024-12-10 11:30:57.620165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:30.568 [2024-12-10 11:30:57.620176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:30.568 [2024-12-10 11:30:57.620191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:30.568 [2024-12-10 11:30:57.620203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:23:30.568 [2024-12-10 11:30:57.620218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.568 [2024-12-10 11:30:57.620229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:30.568 [2024-12-10 11:30:57.620244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:30.568 [2024-12-10 11:30:57.620255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.569 [2024-12-10 11:30:57.620269] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:30.569 [2024-12-10 11:30:57.620281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:30.569 [2024-12-10 11:30:57.620296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:30.569 [2024-12-10 11:30:57.620308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.569 [2024-12-10 11:30:57.620326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:30.569 [2024-12-10 11:30:57.620339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:30.569 [2024-12-10 11:30:57.620354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:30.569 [2024-12-10 11:30:57.620366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:30.569 [2024-12-10 11:30:57.620380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:30.569 [2024-12-10 11:30:57.620392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:30.569 [2024-12-10 11:30:57.620408] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:30.569 [2024-12-10 11:30:57.620423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:30.569 [2024-12-10 11:30:57.620440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:30.569 [2024-12-10 11:30:57.620453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:30.569 [2024-12-10 11:30:57.620468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:30.569 [2024-12-10 11:30:57.620480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:30.569 [2024-12-10 11:30:57.620495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:30.569 [2024-12-10 11:30:57.620508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:30.569 [2024-12-10 11:30:57.620523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:30.569 [2024-12-10 11:30:57.620536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:30.569 [2024-12-10 11:30:57.620556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:30.569 [2024-12-10 11:30:57.620569] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:30.569 [2024-12-10 11:30:57.620584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:30.569 [2024-12-10 11:30:57.620597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:30.569 [2024-12-10 11:30:57.620612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:30.569 [2024-12-10 11:30:57.620624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:30.569 [2024-12-10 11:30:57.620639] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:30.569 [2024-12-10 11:30:57.620653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:30.569 [2024-12-10 11:30:57.620673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:30.569 [2024-12-10 11:30:57.620685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:30.569 [2024-12-10 11:30:57.620700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:30.569 [2024-12-10 11:30:57.620713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:30.569 [2024-12-10 11:30:57.620729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.569 [2024-12-10 11:30:57.620742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:30.569 [2024-12-10 11:30:57.620757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:23:30.569 [2024-12-10 11:30:57.620770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.569 [2024-12-10 11:30:57.620818] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:23:30.569 [2024-12-10 11:30:57.620833] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:37.135 [2024-12-10 11:31:03.659403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:03.659482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:37.135 [2024-12-10 11:31:03.659516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6048.381 ms 00:23:37.135 [2024-12-10 11:31:03.659529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:03.693951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:03.694008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:37.135 [2024-12-10 11:31:03.694034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.180 ms 00:23:37.135 [2024-12-10 11:31:03.694047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:03.694179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:03.694195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:37.135 [2024-12-10 11:31:03.694220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:37.135 [2024-12-10 11:31:03.694232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:03.771439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:03.771490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:37.135 [2024-12-10 11:31:03.771513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.264 ms 00:23:37.135 [2024-12-10 11:31:03.771526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:03.771582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:03.771595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:37.135 [2024-12-10 11:31:03.771614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:37.135 [2024-12-10 11:31:03.771632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:03.772182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:03.772203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:37.135 [2024-12-10 11:31:03.772223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:23:37.135 [2024-12-10 11:31:03.772235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:03.772355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:03.772371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:37.135 [2024-12-10 11:31:03.772395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:23:37.135 [2024-12-10 11:31:03.772409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:03.793084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:03.793125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:37.135 [2024-12-10 
11:31:03.793147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.675 ms 00:23:37.135 [2024-12-10 11:31:03.793177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:03.805027] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:23:37.135 [2024-12-10 11:31:03.810825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:03.810869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:37.135 [2024-12-10 11:31:03.810884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.593 ms 00:23:37.135 [2024-12-10 11:31:03.810899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:03.910506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:03.910787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:37.135 [2024-12-10 11:31:03.910816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.716 ms 00:23:37.135 [2024-12-10 11:31:03.910837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:03.911041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:03.911072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:37.135 [2024-12-10 11:31:03.911087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:23:37.135 [2024-12-10 11:31:03.911113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:03.945934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:03.945991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:37.135 [2024-12-10 11:31:03.946008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.798 ms 00:23:37.135 [2024-12-10 11:31:03.946027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:03.980243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:03.980297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:37.135 [2024-12-10 11:31:03.980315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.227 ms 00:23:37.135 [2024-12-10 11:31:03.980332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:03.980992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:03.981029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:37.135 [2024-12-10 11:31:03.981043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:23:37.135 [2024-12-10 11:31:03.981061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:04.084724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:04.084788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:37.135 [2024-12-10 11:31:04.084805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.771 ms 00:23:37.135 [2024-12-10 11:31:04.084824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 
11:31:04.121352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:04.121415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:37.135 [2024-12-10 11:31:04.121439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.502 ms 00:23:37.135 [2024-12-10 11:31:04.121458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:04.157381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:04.157444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:37.135 [2024-12-10 11:31:04.157461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.935 ms 00:23:37.135 [2024-12-10 11:31:04.157479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:04.192842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:04.192899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:37.135 [2024-12-10 11:31:04.192929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.376 ms 00:23:37.135 [2024-12-10 11:31:04.192964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:04.193012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:04.193039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:37.135 [2024-12-10 11:31:04.193053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:37.135 [2024-12-10 11:31:04.193071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:04.193181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.135 [2024-12-10 11:31:04.193204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:37.135 [2024-12-10 11:31:04.193218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:37.135 [2024-12-10 11:31:04.193235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.135 [2024-12-10 11:31:04.194381] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 6601.930 ms, result 0 00:23:37.135 { 00:23:37.135 "name": "ftl0", 00:23:37.135 "uuid": "0fed0ed2-5488-4d63-a538-76c7c2f5bd9d" 00:23:37.135 } 00:23:37.135 11:31:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:23:37.135 11:31:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:23:37.135 11:31:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:23:37.393 11:31:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:23:37.393 [2024-12-10 11:31:04.494539] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:37.393 I/O size of 69632 is greater than zero copy threshold (65536). 00:23:37.393 Zero copy mechanism will not be used. 00:23:37.393 Running I/O for 4 seconds... 
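
The ftl0 bdev exercised in the runs below was assembled entirely over RPC, as traced above. Condensed into one sequence (every command, UUID, and size is taken from this run; sizes are MiB):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device
$rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # stale lvstore 419abc13-... was deleted first
$rpc bdev_lvol_create nvme0n1p0 103424 -t \
    -u aecda1d6-c3d7-412a-9c87-97299120f55c                         # thin-provisioned 103424 MiB lvol
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache device
$rpc bdev_split_create nvc0n1 -s 5171 1                             # carve one 5171 MiB cache slice
$rpc -t 240 bdev_ftl_create -b ftl0 \
    -d fb8a89e3-cd20-4bb4-a5a3-25222fbcc99a -c nvc0n1p0 \
    --l2p_dram_limit 20                                             # 20 MiB L2P budget; startup took ~6.6 s

The long -t 240 timeout on the create call covers the NV cache scrub, which alone accounted for just over 6 of the 6.6 seconds of FTL startup above. The zero-copy notice for the first workload is plain arithmetic: the I/O size 69632 is 17 x 4096 bytes (68 KiB), just over bdevperf's 65536-byte zero-copy threshold.
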
00:23:39.708 1327.00 IOPS, 88.12 MiB/s [2024-12-10T11:31:07.757Z] 1343.00 IOPS, 89.18 MiB/s [2024-12-10T11:31:08.695Z] 1357.67 IOPS, 90.16 MiB/s [2024-12-10T11:31:08.695Z] 1380.25 IOPS, 91.66 MiB/s 00:23:41.581 Latency(us) 00:23:41.581 [2024-12-10T11:31:08.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.581 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:23:41.581 ftl0 : 4.00 1379.86 91.63 0.00 0.00 757.96 273.07 2184.53 00:23:41.581 [2024-12-10T11:31:08.695Z] =================================================================================================================== 00:23:41.581 [2024-12-10T11:31:08.695Z] Total : 1379.86 91.63 0.00 0.00 757.96 273.07 2184.53 00:23:41.581 [2024-12-10 11:31:08.499085] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:23:41.581 { 00:23:41.581 "results": [ 00:23:41.581 { 00:23:41.581 "job": "ftl0", 00:23:41.581 "core_mask": "0x1", 00:23:41.581 "workload": "randwrite", 00:23:41.581 "status": "finished", 00:23:41.581 "queue_depth": 1, 00:23:41.581 "io_size": 69632, 00:23:41.581 "runtime": 4.001865, 00:23:41.581 "iops": 1379.8566418407418, 00:23:41.581 "mibps": 91.63110512223676, 00:23:41.581 "io_failed": 0, 00:23:41.581 "io_timeout": 0, 00:23:41.581 "avg_latency_us": 757.9606923165315, 00:23:41.581 "min_latency_us": 273.06666666666666, 00:23:41.581 "max_latency_us": 2184.5333333333333 00:23:41.581 } 00:23:41.581 ], 00:23:41.581 "core_count": 1 00:23:41.581 } 00:23:41.581 11:31:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:23:41.581 [2024-12-10 11:31:08.619278] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:41.581 Running I/O for 4 seconds... 
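
Each perform_tests call reports its summary twice: the human-readable latency table and the JSON blob that follows it. The JSON is the easier form to post-process; assuming the blob is saved to result.json (a hypothetical file name), the headline numbers fall out with jq, which this harness already uses elsewhere:

jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' result.json
# -> ftl0: 1379.85... IOPS, 91.63... MiB/s, avg 757.96... us  (the qd=1 run above)
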
00:23:43.894 11807.00 IOPS, 46.12 MiB/s [2024-12-10T11:31:11.943Z] 11616.00 IOPS, 45.38 MiB/s [2024-12-10T11:31:12.879Z] 11307.33 IOPS, 44.17 MiB/s [2024-12-10T11:31:12.879Z] 11158.50 IOPS, 43.59 MiB/s 00:23:45.765 Latency(us) 00:23:45.765 [2024-12-10T11:31:12.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.765 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:23:45.765 ftl0 : 4.02 11145.23 43.54 0.00 0.00 11460.72 235.23 32215.29 00:23:45.765 [2024-12-10T11:31:12.879Z] =================================================================================================================== 00:23:45.765 [2024-12-10T11:31:12.879Z] Total : 11145.23 43.54 0.00 0.00 11460.72 0.00 32215.29 00:23:45.765 [2024-12-10 11:31:12.637944] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:23:45.765 { 00:23:45.765 "results": [ 00:23:45.765 { 00:23:45.765 "job": "ftl0", 00:23:45.765 "core_mask": "0x1", 00:23:45.765 "workload": "randwrite", 00:23:45.765 "status": "finished", 00:23:45.765 "queue_depth": 128, 00:23:45.765 "io_size": 4096, 00:23:45.765 "runtime": 4.01589, 00:23:45.765 "iops": 11145.225591338409, 00:23:45.765 "mibps": 43.53603746616566, 00:23:45.765 "io_failed": 0, 00:23:45.765 "io_timeout": 0, 00:23:45.765 "avg_latency_us": 11460.724059327707, 00:23:45.765 "min_latency_us": 235.23212851405623, 00:23:45.765 "max_latency_us": 32215.286746987953 00:23:45.765 } 00:23:45.765 ], 00:23:45.765 "core_count": 1 00:23:45.765 } 00:23:45.765 11:31:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:23:45.765 [2024-12-10 11:31:12.773211] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:45.765 Running I/O for 4 seconds... 
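
The numbers in these tables are internally consistent and easy to sanity-check. At queue depth 128 with 4 KiB I/O, Little's law (inflight = IOPS x latency) predicts almost exactly the measured rate of the randwrite run above, and IOPS x block size reproduces the MiB/s column:

awk 'BEGIN {
    printf "predicted IOPS: %.0f\n", 128 / (11460.72 / 1e6)           # 11168 vs 11145 measured
    printf "throughput:     %.2f MiB/s\n", 11145.23 * 4096 / 1048576  # 43.54, matches the table
}'

The small gap between predicted and measured IOPS is the ramp-up and teardown inside the 4.016 s runtime.
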
00:23:48.080 9248.00 IOPS, 36.12 MiB/s [2024-12-10T11:31:16.130Z] 9391.00 IOPS, 36.68 MiB/s [2024-12-10T11:31:17.066Z] 9497.00 IOPS, 37.10 MiB/s [2024-12-10T11:31:17.066Z] 9497.00 IOPS, 37.10 MiB/s 00:23:49.952 Latency(us) 00:23:49.952 [2024-12-10T11:31:17.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.952 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:49.952 Verification LBA range: start 0x0 length 0x1400000 00:23:49.952 ftl0 : 4.01 9505.98 37.13 0.00 0.00 13423.93 222.07 19792.40 00:23:49.952 [2024-12-10T11:31:17.066Z] =================================================================================================================== 00:23:49.952 [2024-12-10T11:31:17.066Z] Total : 9505.98 37.13 0.00 0.00 13423.93 0.00 19792.40 00:23:49.952 [2024-12-10 11:31:16.794653] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:23:49.952 { 00:23:49.952 "results": [ 00:23:49.952 { 00:23:49.952 "job": "ftl0", 00:23:49.952 "core_mask": "0x1", 00:23:49.952 "workload": "verify", 00:23:49.952 "status": "finished", 00:23:49.952 "verify_range": { 00:23:49.952 "start": 0, 00:23:49.952 "length": 20971520 00:23:49.952 }, 00:23:49.952 "queue_depth": 128, 00:23:49.952 "io_size": 4096, 00:23:49.952 "runtime": 4.009478, 00:23:49.952 "iops": 9505.97559083751, 00:23:49.952 "mibps": 37.13271715170902, 00:23:49.952 "io_failed": 0, 00:23:49.952 "io_timeout": 0, 00:23:49.952 "avg_latency_us": 13423.929092367793, 00:23:49.952 "min_latency_us": 222.0722891566265, 00:23:49.952 "max_latency_us": 19792.398393574298 00:23:49.952 } 00:23:49.952 ], 00:23:49.952 "core_count": 1 00:23:49.952 } 00:23:49.952 11:31:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:23:49.952 [2024-12-10 11:31:17.000667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.952 [2024-12-10 11:31:17.000849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:49.952 [2024-12-10 11:31:17.000873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:49.952 [2024-12-10 11:31:17.000887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.952 [2024-12-10 11:31:17.000920] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:49.952 [2024-12-10 11:31:17.004948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.952 [2024-12-10 11:31:17.004980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:49.952 [2024-12-10 11:31:17.004995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.996 ms 00:23:49.952 [2024-12-10 11:31:17.005006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.952 [2024-12-10 11:31:17.007109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.952 [2024-12-10 11:31:17.007265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:49.952 [2024-12-10 11:31:17.007298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.082 ms 00:23:49.952 [2024-12-10 11:31:17.007310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.211 [2024-12-10 11:31:17.223946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.211 [2024-12-10 11:31:17.223991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:23:50.211 [2024-12-10 11:31:17.224013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 216.957 ms 00:23:50.211 [2024-12-10 11:31:17.224024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.211 [2024-12-10 11:31:17.228835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.211 [2024-12-10 11:31:17.228869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:50.211 [2024-12-10 11:31:17.228884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.779 ms 00:23:50.211 [2024-12-10 11:31:17.228897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.211 [2024-12-10 11:31:17.264061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.211 [2024-12-10 11:31:17.264096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:50.211 [2024-12-10 11:31:17.264112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.130 ms 00:23:50.211 [2024-12-10 11:31:17.264121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.211 [2024-12-10 11:31:17.285147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.211 [2024-12-10 11:31:17.285301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:50.211 [2024-12-10 11:31:17.285344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.017 ms 00:23:50.211 [2024-12-10 11:31:17.285355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.211 [2024-12-10 11:31:17.285504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.211 [2024-12-10 11:31:17.285520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:50.211 [2024-12-10 11:31:17.285537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:23:50.211 [2024-12-10 11:31:17.285547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.211 [2024-12-10 11:31:17.320373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.211 [2024-12-10 11:31:17.320510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:50.211 [2024-12-10 11:31:17.320552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.861 ms 00:23:50.211 [2024-12-10 11:31:17.320561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.471 [2024-12-10 11:31:17.354461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.471 [2024-12-10 11:31:17.354496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:50.471 [2024-12-10 11:31:17.354511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.897 ms 00:23:50.471 [2024-12-10 11:31:17.354520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.471 [2024-12-10 11:31:17.387805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.471 [2024-12-10 11:31:17.387840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:50.471 [2024-12-10 11:31:17.387855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.298 ms 00:23:50.471 [2024-12-10 11:31:17.387864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.471 [2024-12-10 11:31:17.421812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.471 [2024-12-10 11:31:17.421847] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:50.471 [2024-12-10 11:31:17.421865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.894 ms 00:23:50.471 [2024-12-10 11:31:17.421874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.471 [2024-12-10 11:31:17.421912] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:50.471 [2024-12-10 11:31:17.421947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:50.471 [2024-12-10 11:31:17.421978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:50.471 [2024-12-10 11:31:17.421988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:50.471 [2024-12-10 11:31:17.422001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:50.471 [2024-12-10 11:31:17.422012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:50.471 [2024-12-10 11:31:17.422042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:50.471 [2024-12-10 11:31:17.422053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:50.471 [2024-12-10 11:31:17.422066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:50.471 [2024-12-10 11:31:17.422076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:50.471 [2024-12-10 11:31:17.422090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:50.471 [2024-12-10 11:31:17.422100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:23:50.472 [2024-12-10 11:31:17.422244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.422995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423175] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:50.472 [2024-12-10 11:31:17.423232] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:50.472 [2024-12-10 11:31:17.423244] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0fed0ed2-5488-4d63-a538-76c7c2f5bd9d 00:23:50.472 [2024-12-10 11:31:17.423258] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:50.473 [2024-12-10 11:31:17.423271] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:50.473 [2024-12-10 11:31:17.423281] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:50.473 [2024-12-10 11:31:17.423294] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:50.473 [2024-12-10 11:31:17.423303] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:50.473 [2024-12-10 11:31:17.423316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:50.473 [2024-12-10 11:31:17.423326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:50.473 [2024-12-10 11:31:17.423340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:50.473 [2024-12-10 11:31:17.423349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:50.473 [2024-12-10 11:31:17.423361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.473 [2024-12-10 11:31:17.423371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:50.473 [2024-12-10 11:31:17.423385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.453 ms 00:23:50.473 [2024-12-10 11:31:17.423395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.473 [2024-12-10 11:31:17.441899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.473 [2024-12-10 11:31:17.442070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:50.473 [2024-12-10 11:31:17.442096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.468 ms 00:23:50.473 [2024-12-10 11:31:17.442107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.473 [2024-12-10 11:31:17.442684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.473 [2024-12-10 11:31:17.442702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:50.473 [2024-12-10 11:31:17.442717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:23:50.473 [2024-12-10 11:31:17.442727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.473 [2024-12-10 11:31:17.493585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.473 [2024-12-10 11:31:17.493744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:50.473 [2024-12-10 11:31:17.493772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.473 [2024-12-10 11:31:17.493783] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:50.473 [2024-12-10 11:31:17.493839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.473 [2024-12-10 11:31:17.493850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:50.473 [2024-12-10 11:31:17.493863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.473 [2024-12-10 11:31:17.493873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.473 [2024-12-10 11:31:17.493988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.473 [2024-12-10 11:31:17.494003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:50.473 [2024-12-10 11:31:17.494017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.473 [2024-12-10 11:31:17.494027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.473 [2024-12-10 11:31:17.494047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.473 [2024-12-10 11:31:17.494058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:50.473 [2024-12-10 11:31:17.494071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.473 [2024-12-10 11:31:17.494081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.732 [2024-12-10 11:31:17.611725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.732 [2024-12-10 11:31:17.611773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:50.732 [2024-12-10 11:31:17.611794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.732 [2024-12-10 11:31:17.611804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.732 [2024-12-10 11:31:17.706503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.732 [2024-12-10 11:31:17.706546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:50.732 [2024-12-10 11:31:17.706563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.732 [2024-12-10 11:31:17.706573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.732 [2024-12-10 11:31:17.706684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.732 [2024-12-10 11:31:17.706697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:50.732 [2024-12-10 11:31:17.706710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.732 [2024-12-10 11:31:17.706721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.732 [2024-12-10 11:31:17.706767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.732 [2024-12-10 11:31:17.706778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:50.732 [2024-12-10 11:31:17.706791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.732 [2024-12-10 11:31:17.706801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.732 [2024-12-10 11:31:17.706911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.732 [2024-12-10 11:31:17.706952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:50.732 [2024-12-10 11:31:17.706968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:23:50.732 [2024-12-10 11:31:17.706979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.732 [2024-12-10 11:31:17.707023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.732 [2024-12-10 11:31:17.707055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:50.732 [2024-12-10 11:31:17.707068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.732 [2024-12-10 11:31:17.707077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.732 [2024-12-10 11:31:17.707133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.732 [2024-12-10 11:31:17.707147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:50.732 [2024-12-10 11:31:17.707160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.732 [2024-12-10 11:31:17.707180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.732 [2024-12-10 11:31:17.707227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.732 [2024-12-10 11:31:17.707240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:50.732 [2024-12-10 11:31:17.707253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.732 [2024-12-10 11:31:17.707264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.732 [2024-12-10 11:31:17.707390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 707.829 ms, result 0 00:23:50.732 true 00:23:50.732 11:31:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77859 00:23:50.732 11:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77859 ']' 00:23:50.732 11:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77859 00:23:50.732 11:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:23:50.732 11:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.732 11:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77859 00:23:50.732 killing process with pid 77859 00:23:50.732 Received shutdown signal, test time was about 4.000000 seconds 00:23:50.732 00:23:50.732 Latency(us) 00:23:50.732 [2024-12-10T11:31:17.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.732 [2024-12-10T11:31:17.846Z] =================================================================================================================== 00:23:50.732 [2024-12-10T11:31:17.846Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.732 11:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:50.732 11:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:50.732 11:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77859' 00:23:50.732 11:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77859 00:23:50.732 11:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77859 00:23:54.924 11:31:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:54.924 Remove shared memory files 00:23:54.924 11:31:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:23:54.924 11:31:21 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:54.924 11:31:21 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:23:54.924 11:31:21 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:23:54.924 11:31:21 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:23:54.924 11:31:21 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:54.924 11:31:21 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:23:54.924 ************************************ 00:23:54.924 END TEST ftl_bdevperf 00:23:54.924 ************************************ 00:23:54.924 00:23:54.924 real 0m27.937s 00:23:54.924 user 0m30.365s 00:23:54.924 sys 0m1.355s 00:23:54.924 11:31:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.924 11:31:21 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:54.924 11:31:21 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:54.924 11:31:21 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:54.924 11:31:21 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.924 11:31:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:54.924 ************************************ 00:23:54.924 START TEST ftl_trim 00:23:54.924 ************************************ 00:23:54.924 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:54.924 * Looking for test storage... 00:23:54.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:54.924 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:54.924 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:23:54.924 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:54.924 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.924 11:31:21 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:23:54.924 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.924 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:54.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.924 --rc genhtml_branch_coverage=1 00:23:54.924 --rc genhtml_function_coverage=1 00:23:54.924 --rc genhtml_legend=1 00:23:54.924 --rc geninfo_all_blocks=1 00:23:54.924 --rc geninfo_unexecuted_blocks=1 00:23:54.924 00:23:54.924 ' 00:23:54.924 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:54.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.924 --rc genhtml_branch_coverage=1 00:23:54.924 --rc genhtml_function_coverage=1 00:23:54.924 --rc genhtml_legend=1 00:23:54.925 --rc geninfo_all_blocks=1 00:23:54.925 --rc geninfo_unexecuted_blocks=1 00:23:54.925 00:23:54.925 ' 00:23:54.925 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:54.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.925 --rc genhtml_branch_coverage=1 00:23:54.925 --rc genhtml_function_coverage=1 00:23:54.925 --rc genhtml_legend=1 00:23:54.925 --rc geninfo_all_blocks=1 00:23:54.925 --rc geninfo_unexecuted_blocks=1 00:23:54.925 00:23:54.925 ' 00:23:54.925 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:54.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.925 --rc genhtml_branch_coverage=1 00:23:54.925 --rc genhtml_function_coverage=1 00:23:54.925 --rc genhtml_legend=1 00:23:54.925 --rc geninfo_all_blocks=1 00:23:54.925 --rc geninfo_unexecuted_blocks=1 00:23:54.925 00:23:54.925 ' 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
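The common.sh prologue traced here resolves every path from the script's own location rather than the caller's working directory. A minimal sketch of that pattern, mirroring the dirname/readlink trace at ftl/common.sh@8-9:

    # Derive the test directory from this script's path, then the repo root
    # two levels up, independent of where the script was invoked from.
    testdir=$(readlink -f "$(dirname "$0")")
    rootdir=$(readlink -f "$testdir/../..")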
00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:54.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
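Before issuing any bdev RPCs, the harness backgrounds spdk_tgt on three cores (-m 0x7, per trim.sh@39) and blocks until the RPC socket answers. A hedged sketch of that launch-and-wait step; the real test uses the waitforlisten helper from autotest_common.sh, and the rpc_get_methods polling loop below is an assumed stand-in for it:

    # Start the target on cores 0-2 and wait for /var/tmp/spdk.sock to answer.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
    svcpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5   # assumed readiness probe; waitforlisten's real logic differs
    done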
00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78264 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78264 00:23:54.925 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78264 ']' 00:23:54.925 11:31:21 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:23:54.925 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.925 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.925 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.925 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.925 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:54.925 [2024-12-10 11:31:21.736686] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:23:54.925 [2024-12-10 11:31:21.736946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78264 ] 00:23:54.925 [2024-12-10 11:31:21.922313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:55.184 [2024-12-10 11:31:22.037126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.184 [2024-12-10 11:31:22.037207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.184 [2024-12-10 11:31:22.037241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.121 11:31:22 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.121 11:31:22 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:56.121 11:31:22 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:56.121 11:31:22 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:23:56.121 11:31:22 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:56.121 11:31:22 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:23:56.121 11:31:22 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:23:56.121 11:31:22 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:56.121 11:31:23 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:56.121 11:31:23 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:23:56.121 11:31:23 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:56.121 11:31:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:56.121 11:31:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:56.121 11:31:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:56.121 11:31:23 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:56.379 11:31:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:56.379 11:31:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 
00:23:56.379 { 00:23:56.379 "name": "nvme0n1", 00:23:56.379 "aliases": [ 00:23:56.379 "de30fab2-3c11-4c95-ac9d-b54af1592469" 00:23:56.379 ], 00:23:56.379 "product_name": "NVMe disk", 00:23:56.379 "block_size": 4096, 00:23:56.379 "num_blocks": 1310720, 00:23:56.379 "uuid": "de30fab2-3c11-4c95-ac9d-b54af1592469", 00:23:56.379 "numa_id": -1, 00:23:56.379 "assigned_rate_limits": { 00:23:56.379 "rw_ios_per_sec": 0, 00:23:56.379 "rw_mbytes_per_sec": 0, 00:23:56.379 "r_mbytes_per_sec": 0, 00:23:56.379 "w_mbytes_per_sec": 0 00:23:56.379 }, 00:23:56.379 "claimed": true, 00:23:56.379 "claim_type": "read_many_write_one", 00:23:56.379 "zoned": false, 00:23:56.379 "supported_io_types": { 00:23:56.379 "read": true, 00:23:56.379 "write": true, 00:23:56.379 "unmap": true, 00:23:56.379 "flush": true, 00:23:56.379 "reset": true, 00:23:56.379 "nvme_admin": true, 00:23:56.379 "nvme_io": true, 00:23:56.379 "nvme_io_md": false, 00:23:56.379 "write_zeroes": true, 00:23:56.379 "zcopy": false, 00:23:56.379 "get_zone_info": false, 00:23:56.379 "zone_management": false, 00:23:56.379 "zone_append": false, 00:23:56.379 "compare": true, 00:23:56.379 "compare_and_write": false, 00:23:56.379 "abort": true, 00:23:56.379 "seek_hole": false, 00:23:56.379 "seek_data": false, 00:23:56.379 "copy": true, 00:23:56.379 "nvme_iov_md": false 00:23:56.379 }, 00:23:56.379 "driver_specific": { 00:23:56.379 "nvme": [ 00:23:56.379 { 00:23:56.379 "pci_address": "0000:00:11.0", 00:23:56.379 "trid": { 00:23:56.379 "trtype": "PCIe", 00:23:56.379 "traddr": "0000:00:11.0" 00:23:56.379 }, 00:23:56.379 "ctrlr_data": { 00:23:56.380 "cntlid": 0, 00:23:56.380 "vendor_id": "0x1b36", 00:23:56.380 "model_number": "QEMU NVMe Ctrl", 00:23:56.380 "serial_number": "12341", 00:23:56.380 "firmware_revision": "8.0.0", 00:23:56.380 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:56.380 "oacs": { 00:23:56.380 "security": 0, 00:23:56.380 "format": 1, 00:23:56.380 "firmware": 0, 00:23:56.380 "ns_manage": 1 00:23:56.380 }, 00:23:56.380 "multi_ctrlr": false, 00:23:56.380 "ana_reporting": false 00:23:56.380 }, 00:23:56.380 "vs": { 00:23:56.380 "nvme_version": "1.4" 00:23:56.380 }, 00:23:56.380 "ns_data": { 00:23:56.380 "id": 1, 00:23:56.380 "can_share": false 00:23:56.380 } 00:23:56.380 } 00:23:56.380 ], 00:23:56.380 "mp_policy": "active_passive" 00:23:56.380 } 00:23:56.380 } 00:23:56.380 ]' 00:23:56.380 11:31:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:56.380 11:31:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:56.380 11:31:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:56.638 11:31:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:56.638 11:31:23 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:56.638 11:31:23 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:23:56.638 11:31:23 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:23:56.638 11:31:23 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:56.638 11:31:23 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:23:56.638 11:31:23 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:56.638 11:31:23 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:56.638 11:31:23 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=aecda1d6-c3d7-412a-9c87-97299120f55c 00:23:56.638 11:31:23 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:23:56.639 11:31:23 ftl.ftl_trim -- 
ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aecda1d6-c3d7-412a-9c87-97299120f55c 00:23:56.897 11:31:23 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:57.154 11:31:24 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=7f923bc7-3320-43d7-a1bf-53e701987368 00:23:57.154 11:31:24 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7f923bc7-3320-43d7-a1bf-53e701987368 00:23:57.411 11:31:24 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=ef66e189-fffc-4a63-a1ae-4bf7be050cff 00:23:57.411 11:31:24 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ef66e189-fffc-4a63-a1ae-4bf7be050cff 00:23:57.411 11:31:24 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:23:57.411 11:31:24 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:57.411 11:31:24 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=ef66e189-fffc-4a63-a1ae-4bf7be050cff 00:23:57.411 11:31:24 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:23:57.411 11:31:24 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size ef66e189-fffc-4a63-a1ae-4bf7be050cff 00:23:57.411 11:31:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ef66e189-fffc-4a63-a1ae-4bf7be050cff 00:23:57.411 11:31:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:57.411 11:31:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:57.411 11:31:24 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:57.411 11:31:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef66e189-fffc-4a63-a1ae-4bf7be050cff 00:23:57.673 11:31:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:57.673 { 00:23:57.673 "name": "ef66e189-fffc-4a63-a1ae-4bf7be050cff", 00:23:57.673 "aliases": [ 00:23:57.673 "lvs/nvme0n1p0" 00:23:57.673 ], 00:23:57.673 "product_name": "Logical Volume", 00:23:57.673 "block_size": 4096, 00:23:57.673 "num_blocks": 26476544, 00:23:57.673 "uuid": "ef66e189-fffc-4a63-a1ae-4bf7be050cff", 00:23:57.673 "assigned_rate_limits": { 00:23:57.673 "rw_ios_per_sec": 0, 00:23:57.673 "rw_mbytes_per_sec": 0, 00:23:57.673 "r_mbytes_per_sec": 0, 00:23:57.673 "w_mbytes_per_sec": 0 00:23:57.673 }, 00:23:57.673 "claimed": false, 00:23:57.673 "zoned": false, 00:23:57.673 "supported_io_types": { 00:23:57.674 "read": true, 00:23:57.674 "write": true, 00:23:57.674 "unmap": true, 00:23:57.674 "flush": false, 00:23:57.674 "reset": true, 00:23:57.674 "nvme_admin": false, 00:23:57.674 "nvme_io": false, 00:23:57.674 "nvme_io_md": false, 00:23:57.674 "write_zeroes": true, 00:23:57.674 "zcopy": false, 00:23:57.674 "get_zone_info": false, 00:23:57.674 "zone_management": false, 00:23:57.674 "zone_append": false, 00:23:57.674 "compare": false, 00:23:57.674 "compare_and_write": false, 00:23:57.674 "abort": false, 00:23:57.674 "seek_hole": true, 00:23:57.674 "seek_data": true, 00:23:57.674 "copy": false, 00:23:57.674 "nvme_iov_md": false 00:23:57.674 }, 00:23:57.674 "driver_specific": { 00:23:57.674 "lvol": { 00:23:57.674 "lvol_store_uuid": "7f923bc7-3320-43d7-a1bf-53e701987368", 00:23:57.674 "base_bdev": "nvme0n1", 00:23:57.674 "thin_provision": true, 00:23:57.674 "num_allocated_clusters": 0, 00:23:57.674 "snapshot": false, 00:23:57.674 "clone": false, 00:23:57.674 "esnap_clone": false 00:23:57.674 } 00:23:57.674 } 
00:23:57.674 } 00:23:57.674 ]' 00:23:57.674 11:31:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:57.674 11:31:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:57.674 11:31:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:57.674 11:31:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:57.674 11:31:24 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:57.674 11:31:24 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:57.674 11:31:24 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:23:57.674 11:31:24 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:23:57.674 11:31:24 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:57.949 11:31:24 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:57.949 11:31:24 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:57.949 11:31:24 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size ef66e189-fffc-4a63-a1ae-4bf7be050cff 00:23:57.949 11:31:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ef66e189-fffc-4a63-a1ae-4bf7be050cff 00:23:57.949 11:31:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:57.949 11:31:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:57.949 11:31:24 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:57.949 11:31:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef66e189-fffc-4a63-a1ae-4bf7be050cff 00:23:58.241 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:58.241 { 00:23:58.241 "name": "ef66e189-fffc-4a63-a1ae-4bf7be050cff", 00:23:58.241 "aliases": [ 00:23:58.241 "lvs/nvme0n1p0" 00:23:58.241 ], 00:23:58.241 "product_name": "Logical Volume", 00:23:58.241 "block_size": 4096, 00:23:58.241 "num_blocks": 26476544, 00:23:58.241 "uuid": "ef66e189-fffc-4a63-a1ae-4bf7be050cff", 00:23:58.241 "assigned_rate_limits": { 00:23:58.241 "rw_ios_per_sec": 0, 00:23:58.241 "rw_mbytes_per_sec": 0, 00:23:58.241 "r_mbytes_per_sec": 0, 00:23:58.241 "w_mbytes_per_sec": 0 00:23:58.241 }, 00:23:58.241 "claimed": false, 00:23:58.241 "zoned": false, 00:23:58.241 "supported_io_types": { 00:23:58.241 "read": true, 00:23:58.241 "write": true, 00:23:58.241 "unmap": true, 00:23:58.241 "flush": false, 00:23:58.241 "reset": true, 00:23:58.241 "nvme_admin": false, 00:23:58.241 "nvme_io": false, 00:23:58.241 "nvme_io_md": false, 00:23:58.241 "write_zeroes": true, 00:23:58.241 "zcopy": false, 00:23:58.241 "get_zone_info": false, 00:23:58.241 "zone_management": false, 00:23:58.241 "zone_append": false, 00:23:58.241 "compare": false, 00:23:58.241 "compare_and_write": false, 00:23:58.241 "abort": false, 00:23:58.241 "seek_hole": true, 00:23:58.241 "seek_data": true, 00:23:58.241 "copy": false, 00:23:58.241 "nvme_iov_md": false 00:23:58.241 }, 00:23:58.241 "driver_specific": { 00:23:58.241 "lvol": { 00:23:58.241 "lvol_store_uuid": "7f923bc7-3320-43d7-a1bf-53e701987368", 00:23:58.241 "base_bdev": "nvme0n1", 00:23:58.241 "thin_provision": true, 00:23:58.241 "num_allocated_clusters": 0, 00:23:58.241 "snapshot": false, 00:23:58.241 "clone": false, 00:23:58.241 "esnap_clone": false 00:23:58.241 } 00:23:58.241 } 00:23:58.241 } 00:23:58.241 ]' 00:23:58.241 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] 
.block_size' 00:23:58.241 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:58.241 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:58.241 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:58.241 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:58.241 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:58.241 11:31:25 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:23:58.241 11:31:25 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:58.499 11:31:25 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:23:58.499 11:31:25 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:23:58.499 11:31:25 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size ef66e189-fffc-4a63-a1ae-4bf7be050cff 00:23:58.499 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ef66e189-fffc-4a63-a1ae-4bf7be050cff 00:23:58.499 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:58.499 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:58.499 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:58.499 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef66e189-fffc-4a63-a1ae-4bf7be050cff 00:23:58.758 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:58.758 { 00:23:58.758 "name": "ef66e189-fffc-4a63-a1ae-4bf7be050cff", 00:23:58.758 "aliases": [ 00:23:58.758 "lvs/nvme0n1p0" 00:23:58.758 ], 00:23:58.758 "product_name": "Logical Volume", 00:23:58.758 "block_size": 4096, 00:23:58.758 "num_blocks": 26476544, 00:23:58.758 "uuid": "ef66e189-fffc-4a63-a1ae-4bf7be050cff", 00:23:58.758 "assigned_rate_limits": { 00:23:58.758 "rw_ios_per_sec": 0, 00:23:58.758 "rw_mbytes_per_sec": 0, 00:23:58.758 "r_mbytes_per_sec": 0, 00:23:58.758 "w_mbytes_per_sec": 0 00:23:58.758 }, 00:23:58.758 "claimed": false, 00:23:58.758 "zoned": false, 00:23:58.758 "supported_io_types": { 00:23:58.758 "read": true, 00:23:58.758 "write": true, 00:23:58.758 "unmap": true, 00:23:58.758 "flush": false, 00:23:58.758 "reset": true, 00:23:58.758 "nvme_admin": false, 00:23:58.758 "nvme_io": false, 00:23:58.758 "nvme_io_md": false, 00:23:58.758 "write_zeroes": true, 00:23:58.758 "zcopy": false, 00:23:58.758 "get_zone_info": false, 00:23:58.758 "zone_management": false, 00:23:58.758 "zone_append": false, 00:23:58.758 "compare": false, 00:23:58.758 "compare_and_write": false, 00:23:58.758 "abort": false, 00:23:58.758 "seek_hole": true, 00:23:58.758 "seek_data": true, 00:23:58.758 "copy": false, 00:23:58.758 "nvme_iov_md": false 00:23:58.758 }, 00:23:58.758 "driver_specific": { 00:23:58.758 "lvol": { 00:23:58.758 "lvol_store_uuid": "7f923bc7-3320-43d7-a1bf-53e701987368", 00:23:58.759 "base_bdev": "nvme0n1", 00:23:58.759 "thin_provision": true, 00:23:58.759 "num_allocated_clusters": 0, 00:23:58.759 "snapshot": false, 00:23:58.759 "clone": false, 00:23:58.759 "esnap_clone": false 00:23:58.759 } 00:23:58.759 } 00:23:58.759 } 00:23:58.759 ]' 00:23:58.759 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:58.759 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:58.759 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:58.759 11:31:25 
ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:58.759 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:58.759 11:31:25 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:58.759 11:31:25 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:23:58.759 11:31:25 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ef66e189-fffc-4a63-a1ae-4bf7be050cff -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:23:59.018 [2024-12-10 11:31:25.907942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.018 [2024-12-10 11:31:25.908005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:59.018 [2024-12-10 11:31:25.908044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:59.018 [2024-12-10 11:31:25.908057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.018 [2024-12-10 11:31:25.911943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.018 [2024-12-10 11:31:25.911987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:59.018 [2024-12-10 11:31:25.912004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.848 ms 00:23:59.018 [2024-12-10 11:31:25.912015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.018 [2024-12-10 11:31:25.912181] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:59.018 [2024-12-10 11:31:25.913274] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:59.018 [2024-12-10 11:31:25.913447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.018 [2024-12-10 11:31:25.913466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:59.018 [2024-12-10 11:31:25.913480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:23:59.018 [2024-12-10 11:31:25.913492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.018 [2024-12-10 11:31:25.913656] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6a8cf975-d8b5-43ec-a656-efac3c3b89a7 00:23:59.018 [2024-12-10 11:31:25.916051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.018 [2024-12-10 11:31:25.916089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:59.018 [2024-12-10 11:31:25.916103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:59.018 [2024-12-10 11:31:25.916117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.018 [2024-12-10 11:31:25.929551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.018 [2024-12-10 11:31:25.929588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:59.018 [2024-12-10 11:31:25.929605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.327 ms 00:23:59.018 [2024-12-10 11:31:25.929619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.018 [2024-12-10 11:31:25.929791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.018 [2024-12-10 11:31:25.929810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:59.018 [2024-12-10 11:31:25.929821] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:23:59.018 [2024-12-10 11:31:25.929840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.018 [2024-12-10 11:31:25.929885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.018 [2024-12-10 11:31:25.929899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:59.018 [2024-12-10 11:31:25.929911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:59.018 [2024-12-10 11:31:25.929950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.018 [2024-12-10 11:31:25.930002] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:59.018 [2024-12-10 11:31:25.936199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.018 [2024-12-10 11:31:25.936234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:59.018 [2024-12-10 11:31:25.936251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.212 ms 00:23:59.018 [2024-12-10 11:31:25.936278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.018 [2024-12-10 11:31:25.936362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.018 [2024-12-10 11:31:25.936393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:59.018 [2024-12-10 11:31:25.936408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:59.018 [2024-12-10 11:31:25.936419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.018 [2024-12-10 11:31:25.936463] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:59.019 [2024-12-10 11:31:25.936605] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:59.019 [2024-12-10 11:31:25.936627] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:59.019 [2024-12-10 11:31:25.936641] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:59.019 [2024-12-10 11:31:25.936658] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:59.019 [2024-12-10 11:31:25.936671] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:59.019 [2024-12-10 11:31:25.936685] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:59.019 [2024-12-10 11:31:25.936696] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:59.019 [2024-12-10 11:31:25.936711] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:59.019 [2024-12-10 11:31:25.936724] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:59.019 [2024-12-10 11:31:25.936738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.019 [2024-12-10 11:31:25.936748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:59.019 [2024-12-10 11:31:25.936763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:23:59.019 [2024-12-10 11:31:25.936774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.019 [2024-12-10 11:31:25.936882] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.019 [2024-12-10 11:31:25.936893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:59.019 [2024-12-10 11:31:25.936932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:59.019 [2024-12-10 11:31:25.936943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.019 [2024-12-10 11:31:25.937112] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:59.019 [2024-12-10 11:31:25.937127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:59.019 [2024-12-10 11:31:25.937142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:59.019 [2024-12-10 11:31:25.937153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.019 [2024-12-10 11:31:25.937166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:59.019 [2024-12-10 11:31:25.937175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:59.019 [2024-12-10 11:31:25.937188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:59.019 [2024-12-10 11:31:25.937197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:59.019 [2024-12-10 11:31:25.937210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:59.019 [2024-12-10 11:31:25.937220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.019 [2024-12-10 11:31:25.937234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:59.019 [2024-12-10 11:31:25.937244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:59.019 [2024-12-10 11:31:25.937256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.019 [2024-12-10 11:31:25.937265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:59.019 [2024-12-10 11:31:25.937277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:59.019 [2024-12-10 11:31:25.937287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.019 [2024-12-10 11:31:25.937307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:59.019 [2024-12-10 11:31:25.937316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:59.019 [2024-12-10 11:31:25.937329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.019 [2024-12-10 11:31:25.937339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:59.019 [2024-12-10 11:31:25.937351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:59.019 [2024-12-10 11:31:25.937360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.019 [2024-12-10 11:31:25.937372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:59.019 [2024-12-10 11:31:25.937382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:59.019 [2024-12-10 11:31:25.937394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.019 [2024-12-10 11:31:25.937412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:59.019 [2024-12-10 11:31:25.937425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:59.019 [2024-12-10 11:31:25.937434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.019 [2024-12-10 11:31:25.937446] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:59.019 [2024-12-10 11:31:25.937455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:59.019 [2024-12-10 11:31:25.937467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.019 [2024-12-10 11:31:25.937477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:59.019 [2024-12-10 11:31:25.937492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:59.019 [2024-12-10 11:31:25.937501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.019 [2024-12-10 11:31:25.937513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:59.019 [2024-12-10 11:31:25.937522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:59.019 [2024-12-10 11:31:25.937536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.019 [2024-12-10 11:31:25.937545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:59.019 [2024-12-10 11:31:25.937557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:59.019 [2024-12-10 11:31:25.937566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.019 [2024-12-10 11:31:25.937577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:59.019 [2024-12-10 11:31:25.937587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:59.019 [2024-12-10 11:31:25.937598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.019 [2024-12-10 11:31:25.937607] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:59.019 [2024-12-10 11:31:25.937621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:59.019 [2024-12-10 11:31:25.937631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:59.019 [2024-12-10 11:31:25.937644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.019 [2024-12-10 11:31:25.937655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:59.019 [2024-12-10 11:31:25.937674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:59.019 [2024-12-10 11:31:25.937683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:59.019 [2024-12-10 11:31:25.937696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:59.019 [2024-12-10 11:31:25.937705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:59.019 [2024-12-10 11:31:25.937717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:59.019 [2024-12-10 11:31:25.937729] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:59.019 [2024-12-10 11:31:25.937745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.019 [2024-12-10 11:31:25.937761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:59.019 [2024-12-10 11:31:25.937774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:59.019 [2024-12-10 11:31:25.937785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:59.019 [2024-12-10 11:31:25.937799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:59.019 [2024-12-10 11:31:25.937809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:59.019 [2024-12-10 11:31:25.937822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:59.019 [2024-12-10 11:31:25.937833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:59.019 [2024-12-10 11:31:25.937847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:59.019 [2024-12-10 11:31:25.937858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:59.019 [2024-12-10 11:31:25.937874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:59.019 [2024-12-10 11:31:25.937885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:59.019 [2024-12-10 11:31:25.937898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:59.019 [2024-12-10 11:31:25.937909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:59.019 [2024-12-10 11:31:25.937934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:59.019 [2024-12-10 11:31:25.937944] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:59.019 [2024-12-10 11:31:25.937963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.019 [2024-12-10 11:31:25.937974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:59.019 [2024-12-10 11:31:25.937988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:59.019 [2024-12-10 11:31:25.937999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:59.019 [2024-12-10 11:31:25.938012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:59.019 [2024-12-10 11:31:25.938023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.019 [2024-12-10 11:31:25.938037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:59.019 [2024-12-10 11:31:25.938048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:23:59.019 [2024-12-10 11:31:25.938061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.019 [2024-12-10 11:31:25.938173] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:59.019 [2024-12-10 11:31:25.938196] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:03.211 [2024-12-10 11:31:29.988993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.211 [2024-12-10 11:31:29.989075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:03.211 [2024-12-10 11:31:29.989110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4057.393 ms 00:24:03.211 [2024-12-10 11:31:29.989125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.211 [2024-12-10 11:31:30.038796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.211 [2024-12-10 11:31:30.038879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:03.211 [2024-12-10 11:31:30.038900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.354 ms 00:24:03.211 [2024-12-10 11:31:30.038930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.211 [2024-12-10 11:31:30.039144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.211 [2024-12-10 11:31:30.039167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:03.211 [2024-12-10 11:31:30.039207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:03.211 [2024-12-10 11:31:30.039230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.211 [2024-12-10 11:31:30.107322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.211 [2024-12-10 11:31:30.107392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:03.211 [2024-12-10 11:31:30.107425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.139 ms 00:24:03.211 [2024-12-10 11:31:30.107442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.211 [2024-12-10 11:31:30.107626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.211 [2024-12-10 11:31:30.107646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:03.211 [2024-12-10 11:31:30.107659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:03.211 [2024-12-10 11:31:30.107674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.211 [2024-12-10 11:31:30.108449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.211 [2024-12-10 11:31:30.108479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:03.211 [2024-12-10 11:31:30.108491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:24:03.211 [2024-12-10 11:31:30.108508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.211 [2024-12-10 11:31:30.108649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.211 [2024-12-10 11:31:30.108667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:03.211 [2024-12-10 11:31:30.108700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:24:03.211 [2024-12-10 11:31:30.108723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.211 [2024-12-10 11:31:30.135653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.211 [2024-12-10 11:31:30.135712] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:03.211 [2024-12-10 11:31:30.135745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.925 ms 00:24:03.211 [2024-12-10 11:31:30.135762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.211 [2024-12-10 11:31:30.150327] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:03.211 [2024-12-10 11:31:30.176269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.211 [2024-12-10 11:31:30.176339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:03.211 [2024-12-10 11:31:30.176378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.357 ms 00:24:03.211 [2024-12-10 11:31:30.176390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.211 [2024-12-10 11:31:30.297882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.211 [2024-12-10 11:31:30.298004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:03.211 [2024-12-10 11:31:30.298048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 121.525 ms 00:24:03.211 [2024-12-10 11:31:30.298060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.211 [2024-12-10 11:31:30.298364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.211 [2024-12-10 11:31:30.298381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:03.211 [2024-12-10 11:31:30.298405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:24:03.211 [2024-12-10 11:31:30.298416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.470 [2024-12-10 11:31:30.335704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.470 [2024-12-10 11:31:30.335750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:03.470 [2024-12-10 11:31:30.335788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.293 ms 00:24:03.470 [2024-12-10 11:31:30.335799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.470 [2024-12-10 11:31:30.372171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.470 [2024-12-10 11:31:30.372214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:03.470 [2024-12-10 11:31:30.372253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.323 ms 00:24:03.470 [2024-12-10 11:31:30.372264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.470 [2024-12-10 11:31:30.373155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.470 [2024-12-10 11:31:30.373181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:03.470 [2024-12-10 11:31:30.373199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:24:03.470 [2024-12-10 11:31:30.373211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.470 [2024-12-10 11:31:30.488007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.470 [2024-12-10 11:31:30.488092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:03.470 [2024-12-10 11:31:30.488125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.912 ms 00:24:03.470 [2024-12-10 
11:31:30.488137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.470 [2024-12-10 11:31:30.528202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.470 [2024-12-10 11:31:30.528276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:03.470 [2024-12-10 11:31:30.528316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.966 ms 00:24:03.470 [2024-12-10 11:31:30.528329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.470 [2024-12-10 11:31:30.565380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.470 [2024-12-10 11:31:30.565434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:03.470 [2024-12-10 11:31:30.565456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.982 ms 00:24:03.470 [2024-12-10 11:31:30.565483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.729 [2024-12-10 11:31:30.602457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.729 [2024-12-10 11:31:30.602522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:03.729 [2024-12-10 11:31:30.602543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.918 ms 00:24:03.729 [2024-12-10 11:31:30.602555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.729 [2024-12-10 11:31:30.602685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.729 [2024-12-10 11:31:30.602705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:03.729 [2024-12-10 11:31:30.602729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:03.729 [2024-12-10 11:31:30.602740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.729 [2024-12-10 11:31:30.602858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.729 [2024-12-10 11:31:30.602870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:03.729 [2024-12-10 11:31:30.602887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:03.729 [2024-12-10 11:31:30.602899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.729 [2024-12-10 11:31:30.604427] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:03.729 [2024-12-10 11:31:30.609084] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4703.692 ms, result 0 00:24:03.729 [2024-12-10 11:31:30.610231] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:03.729 { 00:24:03.729 "name": "ftl0", 00:24:03.729 "uuid": "6a8cf975-d8b5-43ec-a656-efac3c3b89a7" 00:24:03.729 } 00:24:03.729 11:31:30 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:24:03.729 11:31:30 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:24:03.729 11:31:30 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:03.729 11:31:30 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:24:03.729 11:31:30 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:03.729 11:31:30 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:03.729 11:31:30 ftl.ftl_trim -- common/autotest_common.sh@908 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:03.988 11:31:30 ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:24:03.988 [ 00:24:03.988 { 00:24:03.988 "name": "ftl0", 00:24:03.988 "aliases": [ 00:24:03.988 "6a8cf975-d8b5-43ec-a656-efac3c3b89a7" 00:24:03.988 ], 00:24:03.988 "product_name": "FTL disk", 00:24:03.988 "block_size": 4096, 00:24:03.988 "num_blocks": 23592960, 00:24:03.988 "uuid": "6a8cf975-d8b5-43ec-a656-efac3c3b89a7", 00:24:03.988 "assigned_rate_limits": { 00:24:03.988 "rw_ios_per_sec": 0, 00:24:03.988 "rw_mbytes_per_sec": 0, 00:24:03.988 "r_mbytes_per_sec": 0, 00:24:03.988 "w_mbytes_per_sec": 0 00:24:03.988 }, 00:24:03.988 "claimed": false, 00:24:03.988 "zoned": false, 00:24:03.988 "supported_io_types": { 00:24:03.988 "read": true, 00:24:03.988 "write": true, 00:24:03.988 "unmap": true, 00:24:03.988 "flush": true, 00:24:03.988 "reset": false, 00:24:03.988 "nvme_admin": false, 00:24:03.988 "nvme_io": false, 00:24:03.988 "nvme_io_md": false, 00:24:03.988 "write_zeroes": true, 00:24:03.988 "zcopy": false, 00:24:03.988 "get_zone_info": false, 00:24:03.988 "zone_management": false, 00:24:03.988 "zone_append": false, 00:24:03.988 "compare": false, 00:24:03.988 "compare_and_write": false, 00:24:03.988 "abort": false, 00:24:03.988 "seek_hole": false, 00:24:03.988 "seek_data": false, 00:24:03.988 "copy": false, 00:24:03.988 "nvme_iov_md": false 00:24:03.988 }, 00:24:03.988 "driver_specific": { 00:24:03.988 "ftl": { 00:24:03.988 "base_bdev": "ef66e189-fffc-4a63-a1ae-4bf7be050cff", 00:24:03.988 "cache": "nvc0n1p0" 00:24:03.988 } 00:24:03.988 } 00:24:03.988 } 00:24:03.988 ] 00:24:03.988 11:31:31 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:24:03.988 11:31:31 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:24:03.988 11:31:31 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:04.246 11:31:31 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:24:04.246 11:31:31 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:24:04.505 11:31:31 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:24:04.505 { 00:24:04.505 "name": "ftl0", 00:24:04.505 "aliases": [ 00:24:04.505 "6a8cf975-d8b5-43ec-a656-efac3c3b89a7" 00:24:04.505 ], 00:24:04.505 "product_name": "FTL disk", 00:24:04.505 "block_size": 4096, 00:24:04.505 "num_blocks": 23592960, 00:24:04.505 "uuid": "6a8cf975-d8b5-43ec-a656-efac3c3b89a7", 00:24:04.505 "assigned_rate_limits": { 00:24:04.505 "rw_ios_per_sec": 0, 00:24:04.505 "rw_mbytes_per_sec": 0, 00:24:04.505 "r_mbytes_per_sec": 0, 00:24:04.505 "w_mbytes_per_sec": 0 00:24:04.505 }, 00:24:04.505 "claimed": false, 00:24:04.505 "zoned": false, 00:24:04.505 "supported_io_types": { 00:24:04.505 "read": true, 00:24:04.505 "write": true, 00:24:04.505 "unmap": true, 00:24:04.505 "flush": true, 00:24:04.505 "reset": false, 00:24:04.505 "nvme_admin": false, 00:24:04.505 "nvme_io": false, 00:24:04.505 "nvme_io_md": false, 00:24:04.505 "write_zeroes": true, 00:24:04.505 "zcopy": false, 00:24:04.505 "get_zone_info": false, 00:24:04.505 "zone_management": false, 00:24:04.505 "zone_append": false, 00:24:04.505 "compare": false, 00:24:04.505 "compare_and_write": false, 00:24:04.505 "abort": false, 00:24:04.505 "seek_hole": false, 00:24:04.505 "seek_data": false, 00:24:04.505 "copy": false, 00:24:04.505 "nvme_iov_md": false 00:24:04.505 }, 00:24:04.505 
"driver_specific": { 00:24:04.505 "ftl": { 00:24:04.505 "base_bdev": "ef66e189-fffc-4a63-a1ae-4bf7be050cff", 00:24:04.505 "cache": "nvc0n1p0" 00:24:04.505 } 00:24:04.505 } 00:24:04.505 } 00:24:04.505 ]' 00:24:04.505 11:31:31 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:24:04.505 11:31:31 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:24:04.505 11:31:31 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:04.765 [2024-12-10 11:31:31.649636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.765 [2024-12-10 11:31:31.649817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:04.765 [2024-12-10 11:31:31.649851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:04.765 [2024-12-10 11:31:31.649878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.765 [2024-12-10 11:31:31.649959] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:04.765 [2024-12-10 11:31:31.654581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.765 [2024-12-10 11:31:31.654618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:04.765 [2024-12-10 11:31:31.654643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.598 ms 00:24:04.765 [2024-12-10 11:31:31.654654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.765 [2024-12-10 11:31:31.655389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.765 [2024-12-10 11:31:31.655412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:04.765 [2024-12-10 11:31:31.655430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:24:04.765 [2024-12-10 11:31:31.655441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.765 [2024-12-10 11:31:31.658342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.765 [2024-12-10 11:31:31.658372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:04.765 [2024-12-10 11:31:31.658390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.862 ms 00:24:04.765 [2024-12-10 11:31:31.658401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.765 [2024-12-10 11:31:31.664152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.765 [2024-12-10 11:31:31.664305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:04.765 [2024-12-10 11:31:31.664351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.694 ms 00:24:04.765 [2024-12-10 11:31:31.664362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.765 [2024-12-10 11:31:31.702599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.765 [2024-12-10 11:31:31.702639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:04.765 [2024-12-10 11:31:31.702667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.181 ms 00:24:04.765 [2024-12-10 11:31:31.702677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.765 [2024-12-10 11:31:31.725585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.765 [2024-12-10 11:31:31.725624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map 
metadata 00:24:04.765 [2024-12-10 11:31:31.725643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.838 ms 00:24:04.765 [2024-12-10 11:31:31.725674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.765 [2024-12-10 11:31:31.725980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.765 [2024-12-10 11:31:31.725996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:04.766 [2024-12-10 11:31:31.726026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:24:04.766 [2024-12-10 11:31:31.726038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.766 [2024-12-10 11:31:31.762086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.766 [2024-12-10 11:31:31.762122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:04.766 [2024-12-10 11:31:31.762140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.064 ms 00:24:04.766 [2024-12-10 11:31:31.762167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.766 [2024-12-10 11:31:31.797516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.766 [2024-12-10 11:31:31.797652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:04.766 [2024-12-10 11:31:31.797697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.302 ms 00:24:04.766 [2024-12-10 11:31:31.797708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.766 [2024-12-10 11:31:31.832964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.766 [2024-12-10 11:31:31.832998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:04.766 [2024-12-10 11:31:31.833013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.145 ms 00:24:04.766 [2024-12-10 11:31:31.833039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.766 [2024-12-10 11:31:31.867769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.766 [2024-12-10 11:31:31.867804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:04.766 [2024-12-10 11:31:31.867821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.620 ms 00:24:04.766 [2024-12-10 11:31:31.867846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.766 [2024-12-10 11:31:31.867963] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:04.766 [2024-12-10 11:31:31.867982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.867999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868412] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 
11:31:31.868761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.868991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.869002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.869018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.869029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:04.766 [2024-12-10 11:31:31.869046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
00:24:04.767 [2024-12-10 11:31:31.869128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:04.767 [2024-12-10 11:31:31.869411] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:04.767 [2024-12-10 11:31:31.869432] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6a8cf975-d8b5-43ec-a656-efac3c3b89a7 00:24:04.767 [2024-12-10 11:31:31.869444] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:04.767 [2024-12-10 11:31:31.869460] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:04.767 [2024-12-10 11:31:31.869470] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:04.767 [2024-12-10 11:31:31.869492] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:04.767 [2024-12-10 11:31:31.869503] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 
00:24:04.767 [2024-12-10 11:31:31.869519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:04.767 [2024-12-10 11:31:31.869530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:04.767 [2024-12-10 11:31:31.869544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:04.767 [2024-12-10 11:31:31.869553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:04.767 [2024-12-10 11:31:31.869568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.767 [2024-12-10 11:31:31.869580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:04.767 [2024-12-10 11:31:31.869596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.608 ms 00:24:04.767 [2024-12-10 11:31:31.869608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.026 [2024-12-10 11:31:31.890999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.026 [2024-12-10 11:31:31.891038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:05.026 [2024-12-10 11:31:31.891077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.372 ms 00:24:05.026 [2024-12-10 11:31:31.891088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.026 [2024-12-10 11:31:31.891726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.026 [2024-12-10 11:31:31.891742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:05.026 [2024-12-10 11:31:31.891759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:24:05.026 [2024-12-10 11:31:31.891770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.026 [2024-12-10 11:31:31.965575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.026 [2024-12-10 11:31:31.965613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:05.026 [2024-12-10 11:31:31.965633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.026 [2024-12-10 11:31:31.965661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.026 [2024-12-10 11:31:31.965818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.026 [2024-12-10 11:31:31.965832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:05.026 [2024-12-10 11:31:31.965850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.026 [2024-12-10 11:31:31.965861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.026 [2024-12-10 11:31:31.965970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.026 [2024-12-10 11:31:31.965985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:05.026 [2024-12-10 11:31:31.966014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.026 [2024-12-10 11:31:31.966037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.026 [2024-12-10 11:31:31.966085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.026 [2024-12-10 11:31:31.966096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:05.026 [2024-12-10 11:31:31.966114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.026 [2024-12-10 11:31:31.966124] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.026 [2024-12-10 11:31:32.105526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.026 [2024-12-10 11:31:32.105733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:05.026 [2024-12-10 11:31:32.105764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.026 [2024-12-10 11:31:32.105777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.285 [2024-12-10 11:31:32.212836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.285 [2024-12-10 11:31:32.212895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:05.285 [2024-12-10 11:31:32.212946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.285 [2024-12-10 11:31:32.212959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.285 [2024-12-10 11:31:32.213167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.285 [2024-12-10 11:31:32.213181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:05.285 [2024-12-10 11:31:32.213205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.285 [2024-12-10 11:31:32.213222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.285 [2024-12-10 11:31:32.213312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.285 [2024-12-10 11:31:32.213340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:05.285 [2024-12-10 11:31:32.213356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.285 [2024-12-10 11:31:32.213367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.285 [2024-12-10 11:31:32.213532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.285 [2024-12-10 11:31:32.213548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:05.285 [2024-12-10 11:31:32.213564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.285 [2024-12-10 11:31:32.213581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.285 [2024-12-10 11:31:32.213664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.285 [2024-12-10 11:31:32.213682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:05.285 [2024-12-10 11:31:32.213699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.285 [2024-12-10 11:31:32.213709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.285 [2024-12-10 11:31:32.213787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.285 [2024-12-10 11:31:32.213804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:05.285 [2024-12-10 11:31:32.213827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.285 [2024-12-10 11:31:32.213837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.285 [2024-12-10 11:31:32.213945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.285 [2024-12-10 11:31:32.213959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:05.285 [2024-12-10 11:31:32.213976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:24:05.285 [2024-12-10 11:31:32.213986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.285 [2024-12-10 11:31:32.214271] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 565.507 ms, result 0 00:24:05.285 true 00:24:05.286 11:31:32 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78264 00:24:05.286 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78264 ']' 00:24:05.286 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78264 00:24:05.286 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:24:05.286 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.286 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78264 00:24:05.286 killing process with pid 78264 00:24:05.286 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:05.286 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:05.286 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78264' 00:24:05.286 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78264 00:24:05.286 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78264 00:24:08.575 11:31:35 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:24:09.513 65536+0 records in 00:24:09.513 65536+0 records out 00:24:09.513 268435456 bytes (268 MB, 256 MiB) copied, 1.00115 s, 268 MB/s 00:24:09.513 11:31:36 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:09.513 [2024-12-10 11:31:36.445147] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
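[For context: the pattern-write step traced above boils down to two commands, both visible in the xtrace. A minimal standalone sketch follows; the log truncates dd's output redirection, so the of= path below is an assumption inferred from the --if= path that spdk_dd reads back.]

    # Generate 256 MiB of random data (65536 x 4 KiB blocks), then replay it
    # onto the ftl0 bdev through spdk_dd using the saved ftl.json config.
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
      --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

[The 268 MB/s figure reported by dd above measures only the urandom-to-file copy; the per-chunk "Copying: N/256 [MB]" progress further down is the spdk_dd transfer onto the FTL device.]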
00:24:09.513 [2024-12-10 11:31:36.445308] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78498 ] 00:24:09.773 [2024-12-10 11:31:36.628931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.773 [2024-12-10 11:31:36.770466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.341 [2024-12-10 11:31:37.201937] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:10.341 [2024-12-10 11:31:37.202233] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:10.341 [2024-12-10 11:31:37.388596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.342 [2024-12-10 11:31:37.388848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:10.342 [2024-12-10 11:31:37.388984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:10.342 [2024-12-10 11:31:37.389031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.342 [2024-12-10 11:31:37.392514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.342 [2024-12-10 11:31:37.392696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:10.342 [2024-12-10 11:31:37.392787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.431 ms 00:24:10.342 [2024-12-10 11:31:37.392827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.342 [2024-12-10 11:31:37.392983] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:10.342 [2024-12-10 11:31:37.394110] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:10.342 [2024-12-10 11:31:37.394300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.342 [2024-12-10 11:31:37.394386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:10.342 [2024-12-10 11:31:37.394428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.328 ms 00:24:10.342 [2024-12-10 11:31:37.394463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.342 [2024-12-10 11:31:37.396088] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:10.342 [2024-12-10 11:31:37.414939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.342 [2024-12-10 11:31:37.414979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:10.342 [2024-12-10 11:31:37.414995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.882 ms 00:24:10.342 [2024-12-10 11:31:37.415007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.342 [2024-12-10 11:31:37.415124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.342 [2024-12-10 11:31:37.415141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:10.342 [2024-12-10 11:31:37.415153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:10.342 [2024-12-10 11:31:37.415165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.342 [2024-12-10 11:31:37.421947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:10.342 [2024-12-10 11:31:37.422133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:10.342 [2024-12-10 11:31:37.422156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.739 ms 00:24:10.342 [2024-12-10 11:31:37.422169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.342 [2024-12-10 11:31:37.422320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.342 [2024-12-10 11:31:37.422337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:10.342 [2024-12-10 11:31:37.422350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:24:10.342 [2024-12-10 11:31:37.422363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.342 [2024-12-10 11:31:37.422403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.342 [2024-12-10 11:31:37.422416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:10.342 [2024-12-10 11:31:37.422428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:10.342 [2024-12-10 11:31:37.422441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.342 [2024-12-10 11:31:37.422467] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:10.342 [2024-12-10 11:31:37.426950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.342 [2024-12-10 11:31:37.426986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:10.342 [2024-12-10 11:31:37.427000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.496 ms 00:24:10.342 [2024-12-10 11:31:37.427011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.342 [2024-12-10 11:31:37.427093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.342 [2024-12-10 11:31:37.427107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:10.342 [2024-12-10 11:31:37.427120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:10.342 [2024-12-10 11:31:37.427140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.342 [2024-12-10 11:31:37.427168] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:10.342 [2024-12-10 11:31:37.427199] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:10.342 [2024-12-10 11:31:37.427234] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:10.342 [2024-12-10 11:31:37.427253] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:10.342 [2024-12-10 11:31:37.427340] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:10.342 [2024-12-10 11:31:37.427356] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:10.342 [2024-12-10 11:31:37.427378] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:10.342 [2024-12-10 11:31:37.427393] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:10.342 [2024-12-10 11:31:37.427408] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:10.342 [2024-12-10 11:31:37.427421] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:10.342 [2024-12-10 11:31:37.427433] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:10.342 [2024-12-10 11:31:37.427444] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:10.342 [2024-12-10 11:31:37.427456] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:10.342 [2024-12-10 11:31:37.427468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.342 [2024-12-10 11:31:37.427479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:10.342 [2024-12-10 11:31:37.427506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:24:10.342 [2024-12-10 11:31:37.427518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.342 [2024-12-10 11:31:37.427600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.342 [2024-12-10 11:31:37.427614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:10.342 [2024-12-10 11:31:37.427626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:10.342 [2024-12-10 11:31:37.427637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.342 [2024-12-10 11:31:37.427721] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:10.342 [2024-12-10 11:31:37.427736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:10.342 [2024-12-10 11:31:37.427748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:10.342 [2024-12-10 11:31:37.427759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.342 [2024-12-10 11:31:37.427772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:10.342 [2024-12-10 11:31:37.427783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:10.342 [2024-12-10 11:31:37.427794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:10.342 [2024-12-10 11:31:37.427807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:10.342 [2024-12-10 11:31:37.427818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:10.342 [2024-12-10 11:31:37.427829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:10.342 [2024-12-10 11:31:37.427840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:10.342 [2024-12-10 11:31:37.427866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:10.342 [2024-12-10 11:31:37.427877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:10.342 [2024-12-10 11:31:37.427888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:10.342 [2024-12-10 11:31:37.427898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:10.342 [2024-12-10 11:31:37.427908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.342 [2024-12-10 11:31:37.427945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:10.342 [2024-12-10 11:31:37.427972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:10.342 [2024-12-10 11:31:37.427983] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.342 [2024-12-10 11:31:37.427995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:10.342 [2024-12-10 11:31:37.428007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:10.342 [2024-12-10 11:31:37.428018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:10.342 [2024-12-10 11:31:37.428029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:10.342 [2024-12-10 11:31:37.428041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:10.342 [2024-12-10 11:31:37.428051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:10.342 [2024-12-10 11:31:37.428062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:10.342 [2024-12-10 11:31:37.428073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:10.342 [2024-12-10 11:31:37.428084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:10.342 [2024-12-10 11:31:37.428094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:10.342 [2024-12-10 11:31:37.428106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:10.342 [2024-12-10 11:31:37.428116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:10.342 [2024-12-10 11:31:37.428127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:10.342 [2024-12-10 11:31:37.428138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:10.342 [2024-12-10 11:31:37.428148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:10.342 [2024-12-10 11:31:37.428158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:10.342 [2024-12-10 11:31:37.428169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:10.342 [2024-12-10 11:31:37.428179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:10.342 [2024-12-10 11:31:37.428190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:10.342 [2024-12-10 11:31:37.428201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:10.342 [2024-12-10 11:31:37.428213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.342 [2024-12-10 11:31:37.428223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:10.342 [2024-12-10 11:31:37.428234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:10.342 [2024-12-10 11:31:37.428244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.343 [2024-12-10 11:31:37.428257] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:10.343 [2024-12-10 11:31:37.428273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:10.343 [2024-12-10 11:31:37.428285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:10.343 [2024-12-10 11:31:37.428295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.343 [2024-12-10 11:31:37.428308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:10.343 [2024-12-10 11:31:37.428320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:10.343 [2024-12-10 11:31:37.428332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:10.343 
[2024-12-10 11:31:37.428343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:10.343 [2024-12-10 11:31:37.428354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:10.343 [2024-12-10 11:31:37.428365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:10.343 [2024-12-10 11:31:37.428377] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:10.343 [2024-12-10 11:31:37.428391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:10.343 [2024-12-10 11:31:37.428404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:10.343 [2024-12-10 11:31:37.428416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:10.343 [2024-12-10 11:31:37.428428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:10.343 [2024-12-10 11:31:37.428439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:10.343 [2024-12-10 11:31:37.428451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:10.343 [2024-12-10 11:31:37.428462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:10.343 [2024-12-10 11:31:37.428474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:10.343 [2024-12-10 11:31:37.428486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:10.343 [2024-12-10 11:31:37.428498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:10.343 [2024-12-10 11:31:37.428510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:10.343 [2024-12-10 11:31:37.428523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:10.343 [2024-12-10 11:31:37.428534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:10.343 [2024-12-10 11:31:37.428546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:10.343 [2024-12-10 11:31:37.428558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:10.343 [2024-12-10 11:31:37.428570] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:10.343 [2024-12-10 11:31:37.428582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:10.343 [2024-12-10 11:31:37.428600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:10.343 [2024-12-10 11:31:37.428612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:10.343 [2024-12-10 11:31:37.428623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:10.343 [2024-12-10 11:31:37.428636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:10.343 [2024-12-10 11:31:37.428650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.343 [2024-12-10 11:31:37.428662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:10.343 [2024-12-10 11:31:37.428674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:24:10.343 [2024-12-10 11:31:37.428685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.603 [2024-12-10 11:31:37.467464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.603 [2024-12-10 11:31:37.467503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:10.603 [2024-12-10 11:31:37.467518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.777 ms 00:24:10.603 [2024-12-10 11:31:37.467538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.603 [2024-12-10 11:31:37.467654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.603 [2024-12-10 11:31:37.467668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:10.603 [2024-12-10 11:31:37.467682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:10.603 [2024-12-10 11:31:37.467693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.603 [2024-12-10 11:31:37.551631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.603 [2024-12-10 11:31:37.551683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:10.603 [2024-12-10 11:31:37.551700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.046 ms 00:24:10.603 [2024-12-10 11:31:37.551713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.603 [2024-12-10 11:31:37.551829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.603 [2024-12-10 11:31:37.551844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:10.603 [2024-12-10 11:31:37.551857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:10.603 [2024-12-10 11:31:37.551867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.603 [2024-12-10 11:31:37.552346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.603 [2024-12-10 11:31:37.552366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:10.603 [2024-12-10 11:31:37.552387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:24:10.603 [2024-12-10 11:31:37.552397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.603 [2024-12-10 11:31:37.552520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.603 [2024-12-10 11:31:37.552535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:10.603 [2024-12-10 11:31:37.552548] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:24:10.603 [2024-12-10 11:31:37.552558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.603 [2024-12-10 11:31:37.573512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.603 [2024-12-10 11:31:37.573555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:10.603 [2024-12-10 11:31:37.573569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.963 ms 00:24:10.603 [2024-12-10 11:31:37.573580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.603 [2024-12-10 11:31:37.592416] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:10.603 [2024-12-10 11:31:37.592474] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:10.603 [2024-12-10 11:31:37.592497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.603 [2024-12-10 11:31:37.592515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:10.603 [2024-12-10 11:31:37.592532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.814 ms 00:24:10.603 [2024-12-10 11:31:37.592549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.603 [2024-12-10 11:31:37.623005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.603 [2024-12-10 11:31:37.623046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:10.603 [2024-12-10 11:31:37.623062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.399 ms 00:24:10.603 [2024-12-10 11:31:37.623073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.603 [2024-12-10 11:31:37.641053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.603 [2024-12-10 11:31:37.641251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:10.603 [2024-12-10 11:31:37.641272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.922 ms 00:24:10.603 [2024-12-10 11:31:37.641284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.603 [2024-12-10 11:31:37.658250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.603 [2024-12-10 11:31:37.658286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:10.603 [2024-12-10 11:31:37.658298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.914 ms 00:24:10.603 [2024-12-10 11:31:37.658308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.603 [2024-12-10 11:31:37.659123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.603 [2024-12-10 11:31:37.659149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:10.603 [2024-12-10 11:31:37.659161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:24:10.603 [2024-12-10 11:31:37.659171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.863 [2024-12-10 11:31:37.740949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.863 [2024-12-10 11:31:37.741007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:10.863 [2024-12-10 11:31:37.741023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 81.880 ms 00:24:10.863 [2024-12-10 11:31:37.741035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.863 [2024-12-10 11:31:37.751116] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:10.863 [2024-12-10 11:31:37.767063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.863 [2024-12-10 11:31:37.767102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:10.863 [2024-12-10 11:31:37.767118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.980 ms 00:24:10.863 [2024-12-10 11:31:37.767129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.863 [2024-12-10 11:31:37.767257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.863 [2024-12-10 11:31:37.767270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:10.863 [2024-12-10 11:31:37.767282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:10.863 [2024-12-10 11:31:37.767292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.863 [2024-12-10 11:31:37.767343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.863 [2024-12-10 11:31:37.767354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:10.863 [2024-12-10 11:31:37.767364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:10.863 [2024-12-10 11:31:37.767373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.863 [2024-12-10 11:31:37.767413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.863 [2024-12-10 11:31:37.767431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:10.863 [2024-12-10 11:31:37.767442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:10.863 [2024-12-10 11:31:37.767451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.863 [2024-12-10 11:31:37.767492] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:10.863 [2024-12-10 11:31:37.767521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.863 [2024-12-10 11:31:37.767531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:10.863 [2024-12-10 11:31:37.767542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:10.863 [2024-12-10 11:31:37.767552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.863 [2024-12-10 11:31:37.803913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.863 [2024-12-10 11:31:37.803956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:10.863 [2024-12-10 11:31:37.803971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.398 ms 00:24:10.863 [2024-12-10 11:31:37.803982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.863 [2024-12-10 11:31:37.804121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.863 [2024-12-10 11:31:37.804139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:10.863 [2024-12-10 11:31:37.804151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:10.863 [2024-12-10 11:31:37.804162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
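[For context: each FTL management step above is traced as a quadruple of *NOTICE* entries (Action, name, duration, status). A minimal sketch for tabulating the per-step timings from such a trace; it assumes the unwrapped log (one entry per physical line, as SPDK emits it before the CI console re-wraps), and jenkins_console.log is a hypothetical saved copy of this output.]

    # Pair each "428:trace_step ... name:" line with the following
    # "430:trace_step ... duration:" line and print an aligned table.
    awk '
      /428:trace_step/ { sub(/.*name: /, "");     step = $0 }
      /430:trace_step/ { sub(/.*duration: /, ""); printf "%-40s %s\n", step, $0 }
    ' jenkins_console.log

[Against the startup sequence above, this would surface the expensive steps at a glance, e.g. "Initialize NV cache 84.046 ms" and "Initialize metadata 38.777 ms" versus the sub-millisecond bookkeeping steps.]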
00:24:10.863 [2024-12-10 11:31:37.805195] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:10.863 [2024-12-10 11:31:37.809461] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 416.914 ms, result 0 00:24:10.863 [2024-12-10 11:31:37.810295] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:10.863 [2024-12-10 11:31:37.828974] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:11.798  [2024-12-10T11:31:39.849Z] Copying: 22/256 [MB] (22 MBps) [2024-12-10T11:31:41.228Z] Copying: 43/256 [MB] (20 MBps) [2024-12-10T11:31:42.178Z] Copying: 65/256 [MB] (22 MBps) [2024-12-10T11:31:43.116Z] Copying: 87/256 [MB] (21 MBps) [2024-12-10T11:31:44.054Z] Copying: 108/256 [MB] (20 MBps) [2024-12-10T11:31:44.990Z] Copying: 129/256 [MB] (21 MBps) [2024-12-10T11:31:45.929Z] Copying: 150/256 [MB] (20 MBps) [2024-12-10T11:31:46.865Z] Copying: 169/256 [MB] (19 MBps) [2024-12-10T11:31:48.244Z] Copying: 189/256 [MB] (19 MBps) [2024-12-10T11:31:49.182Z] Copying: 208/256 [MB] (19 MBps) [2024-12-10T11:31:50.118Z] Copying: 228/256 [MB] (19 MBps) [2024-12-10T11:31:50.377Z] Copying: 248/256 [MB] (20 MBps) [2024-12-10T11:31:50.377Z] Copying: 256/256 [MB] (average 20 MBps)[2024-12-10 11:31:50.155657] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:23.263 [2024-12-10 11:31:50.169959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.263 [2024-12-10 11:31:50.169997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:23.263 [2024-12-10 11:31:50.170011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:23.263 [2024-12-10 11:31:50.170031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.263 [2024-12-10 11:31:50.170053] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:23.263 [2024-12-10 11:31:50.174021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.263 [2024-12-10 11:31:50.174052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:23.263 [2024-12-10 11:31:50.174063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.959 ms 00:24:23.263 [2024-12-10 11:31:50.174073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.263 [2024-12-10 11:31:50.176232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.263 [2024-12-10 11:31:50.176269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:23.263 [2024-12-10 11:31:50.176281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.140 ms 00:24:23.263 [2024-12-10 11:31:50.176291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.263 [2024-12-10 11:31:50.183021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.263 [2024-12-10 11:31:50.183071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:23.263 [2024-12-10 11:31:50.183083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.721 ms 00:24:23.263 [2024-12-10 11:31:50.183092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.263 [2024-12-10 11:31:50.188384] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.263 [2024-12-10 11:31:50.188428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:23.263 [2024-12-10 11:31:50.188440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.264 ms 00:24:23.263 [2024-12-10 11:31:50.188450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.263 [2024-12-10 11:31:50.222105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.263 [2024-12-10 11:31:50.222140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:23.263 [2024-12-10 11:31:50.222153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.675 ms 00:24:23.263 [2024-12-10 11:31:50.222162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.263 [2024-12-10 11:31:50.242081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.263 [2024-12-10 11:31:50.242129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:23.263 [2024-12-10 11:31:50.242149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.900 ms 00:24:23.263 [2024-12-10 11:31:50.242158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.263 [2024-12-10 11:31:50.242285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.263 [2024-12-10 11:31:50.242299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:23.263 [2024-12-10 11:31:50.242310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:23.263 [2024-12-10 11:31:50.242330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.263 [2024-12-10 11:31:50.276928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.263 [2024-12-10 11:31:50.276978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:23.263 [2024-12-10 11:31:50.276991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.637 ms 00:24:23.263 [2024-12-10 11:31:50.277000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.263 [2024-12-10 11:31:50.310554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.263 [2024-12-10 11:31:50.310588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:23.263 [2024-12-10 11:31:50.310600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.554 ms 00:24:23.263 [2024-12-10 11:31:50.310609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.263 [2024-12-10 11:31:50.343574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.263 [2024-12-10 11:31:50.343607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:23.263 [2024-12-10 11:31:50.343619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.967 ms 00:24:23.263 [2024-12-10 11:31:50.343628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.522 [2024-12-10 11:31:50.377206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.522 [2024-12-10 11:31:50.377240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:23.522 [2024-12-10 11:31:50.377252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.560 ms 00:24:23.522 [2024-12-10 11:31:50.377261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:23.522 [2024-12-10 11:31:50.377312] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:23.522 [2024-12-10 11:31:50.377328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:23.522 [2024-12-10 11:31:50.377550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:24:23.523 [2024-12-10 11:31:50.377578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.377997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378373] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:23.523 [2024-12-10 11:31:50.378383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:23.524 [2024-12-10 11:31:50.378401] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:23.524 [2024-12-10 11:31:50.378426] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6a8cf975-d8b5-43ec-a656-efac3c3b89a7 00:24:23.524 [2024-12-10 11:31:50.378437] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:23.524 [2024-12-10 11:31:50.378446] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:23.524 [2024-12-10 11:31:50.378456] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:23.524 [2024-12-10 11:31:50.378466] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:23.524 [2024-12-10 11:31:50.378475] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:23.524 [2024-12-10 11:31:50.378486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:23.524 [2024-12-10 11:31:50.378495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:23.524 [2024-12-10 11:31:50.378504] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:23.524 [2024-12-10 11:31:50.378513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:23.524 [2024-12-10 11:31:50.378523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.524 [2024-12-10 11:31:50.378540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:23.524 [2024-12-10 11:31:50.378550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.213 ms 00:24:23.524 [2024-12-10 11:31:50.378560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.524 [2024-12-10 11:31:50.396955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.524 [2024-12-10 11:31:50.396986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:23.524 [2024-12-10 11:31:50.396998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.404 ms 00:24:23.524 [2024-12-10 11:31:50.397008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.524 [2024-12-10 11:31:50.397525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.524 [2024-12-10 11:31:50.397538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:23.524 [2024-12-10 11:31:50.397550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:24:23.524 [2024-12-10 11:31:50.397559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.524 [2024-12-10 11:31:50.450203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.524 [2024-12-10 11:31:50.450236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:23.524 [2024-12-10 11:31:50.450248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.524 [2024-12-10 11:31:50.450258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.524 [2024-12-10 11:31:50.450339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.524 [2024-12-10 11:31:50.450350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:23.524 
[2024-12-10 11:31:50.450360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.524 [2024-12-10 11:31:50.450370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.524 [2024-12-10 11:31:50.450418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.524 [2024-12-10 11:31:50.450430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:23.524 [2024-12-10 11:31:50.450440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.524 [2024-12-10 11:31:50.450450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.524 [2024-12-10 11:31:50.450466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.524 [2024-12-10 11:31:50.450484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:23.524 [2024-12-10 11:31:50.450494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.524 [2024-12-10 11:31:50.450503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.524 [2024-12-10 11:31:50.569671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.524 [2024-12-10 11:31:50.569714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:23.524 [2024-12-10 11:31:50.569728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.524 [2024-12-10 11:31:50.569738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.783 [2024-12-10 11:31:50.663756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.783 [2024-12-10 11:31:50.663800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:23.783 [2024-12-10 11:31:50.663814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.783 [2024-12-10 11:31:50.663824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.783 [2024-12-10 11:31:50.663893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.783 [2024-12-10 11:31:50.663905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:23.783 [2024-12-10 11:31:50.663929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.783 [2024-12-10 11:31:50.663940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.783 [2024-12-10 11:31:50.663970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.783 [2024-12-10 11:31:50.663982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:23.783 [2024-12-10 11:31:50.664004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.783 [2024-12-10 11:31:50.664014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.783 [2024-12-10 11:31:50.664123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.783 [2024-12-10 11:31:50.664136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:23.783 [2024-12-10 11:31:50.664147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.783 [2024-12-10 11:31:50.664158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.783 [2024-12-10 11:31:50.664195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.783 [2024-12-10 11:31:50.664208] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:23.783 [2024-12-10 11:31:50.664218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.783 [2024-12-10 11:31:50.664237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.783 [2024-12-10 11:31:50.664275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.783 [2024-12-10 11:31:50.664288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:23.783 [2024-12-10 11:31:50.664298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.783 [2024-12-10 11:31:50.664308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.783 [2024-12-10 11:31:50.664351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.783 [2024-12-10 11:31:50.664364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:23.783 [2024-12-10 11:31:50.664381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.783 [2024-12-10 11:31:50.664392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.783 [2024-12-10 11:31:50.664541] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 495.391 ms, result 0 00:24:24.760 00:24:24.760 00:24:24.760 11:31:51 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78658 00:24:24.760 11:31:51 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:24:24.760 11:31:51 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78658 00:24:24.760 11:31:51 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78658 ']' 00:24:24.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.760 11:31:51 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.760 11:31:51 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.760 11:31:51 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.760 11:31:51 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.760 11:31:51 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:25.020 [2024-12-10 11:31:51.978437] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
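At this point the harness has launched a fresh spdk_tgt (with -L ftl_init enabling that component's debug log flag) and blocks in waitforlisten until the target answers on /var/tmp/spdk.sock; only then is the saved configuration replayed with rpc.py load_config. A minimal sketch of that wait-then-configure pattern -- not the harness's actual implementation -- using the real rpc.py client; the config file fed to load_config is an assumption here, since xtrace does not show redirections:

  # Poll the RPC socket until the target responds, then replay the config.
  # rpc_get_methods is a cheap RPC that succeeds as soon as the server is up.
  sock=/var/tmp/spdk.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
    "$rpc" -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done
  # Assumed input file: the ftl.json used elsewhere in this run.
  "$rpc" -s "$sock" load_config < /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json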
00:24:25.020 [2024-12-10 11:31:51.978578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78658 ] 00:24:25.279 [2024-12-10 11:31:52.165933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.279 [2024-12-10 11:31:52.266900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.217 11:31:53 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.217 11:31:53 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:24:26.217 11:31:53 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:24:26.217 [2024-12-10 11:31:53.309190] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:26.217 [2024-12-10 11:31:53.309248] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:26.478 [2024-12-10 11:31:53.466866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.478 [2024-12-10 11:31:53.467108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:26.478 [2024-12-10 11:31:53.467139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:26.478 [2024-12-10 11:31:53.467151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.478 [2024-12-10 11:31:53.470204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.478 [2024-12-10 11:31:53.470380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:26.478 [2024-12-10 11:31:53.470406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:24:26.478 [2024-12-10 11:31:53.470419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.478 [2024-12-10 11:31:53.470583] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:26.478 [2024-12-10 11:31:53.471577] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:26.478 [2024-12-10 11:31:53.471620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.478 [2024-12-10 11:31:53.471632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:26.478 [2024-12-10 11:31:53.471654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:24:26.478 [2024-12-10 11:31:53.471664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.478 [2024-12-10 11:31:53.473316] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:26.478 [2024-12-10 11:31:53.491531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.478 [2024-12-10 11:31:53.491574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:26.478 [2024-12-10 11:31:53.491588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.248 ms 00:24:26.478 [2024-12-10 11:31:53.491600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.478 [2024-12-10 11:31:53.491695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.478 [2024-12-10 11:31:53.491710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:26.478 [2024-12-10 11:31:53.491721] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:26.478 [2024-12-10 11:31:53.491733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.478 [2024-12-10 11:31:53.498652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.478 [2024-12-10 11:31:53.498690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:26.478 [2024-12-10 11:31:53.498702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.883 ms 00:24:26.478 [2024-12-10 11:31:53.498714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.478 [2024-12-10 11:31:53.498823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.478 [2024-12-10 11:31:53.498840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:26.478 [2024-12-10 11:31:53.498851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:24:26.478 [2024-12-10 11:31:53.498868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.478 [2024-12-10 11:31:53.498894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.478 [2024-12-10 11:31:53.498907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:26.478 [2024-12-10 11:31:53.498938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:26.478 [2024-12-10 11:31:53.498951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.478 [2024-12-10 11:31:53.498975] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:26.478 [2024-12-10 11:31:53.503420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.478 [2024-12-10 11:31:53.503450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:26.478 [2024-12-10 11:31:53.503465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.455 ms 00:24:26.478 [2024-12-10 11:31:53.503475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.478 [2024-12-10 11:31:53.503546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.478 [2024-12-10 11:31:53.503557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:26.478 [2024-12-10 11:31:53.503570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:26.478 [2024-12-10 11:31:53.503583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.478 [2024-12-10 11:31:53.503614] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:26.478 [2024-12-10 11:31:53.503638] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:26.478 [2024-12-10 11:31:53.503683] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:26.478 [2024-12-10 11:31:53.503702] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:26.478 [2024-12-10 11:31:53.503792] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:26.478 [2024-12-10 11:31:53.503806] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:26.478 [2024-12-10 11:31:53.503824] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:26.478 [2024-12-10 11:31:53.503836] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:26.478 [2024-12-10 11:31:53.503850] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:26.478 [2024-12-10 11:31:53.503862] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:26.478 [2024-12-10 11:31:53.503875] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:26.478 [2024-12-10 11:31:53.503885] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:26.478 [2024-12-10 11:31:53.503900] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:26.478 [2024-12-10 11:31:53.503910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.478 [2024-12-10 11:31:53.503941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:26.478 [2024-12-10 11:31:53.503952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:24:26.478 [2024-12-10 11:31:53.503964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.478 [2024-12-10 11:31:53.504036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.478 [2024-12-10 11:31:53.504051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:26.478 [2024-12-10 11:31:53.504061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:26.478 [2024-12-10 11:31:53.504072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.478 [2024-12-10 11:31:53.504174] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:26.478 [2024-12-10 11:31:53.504190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:26.478 [2024-12-10 11:31:53.504201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:26.478 [2024-12-10 11:31:53.504214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.478 [2024-12-10 11:31:53.504225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:26.478 [2024-12-10 11:31:53.504238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:26.478 [2024-12-10 11:31:53.504247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:26.478 [2024-12-10 11:31:53.504262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:26.478 [2024-12-10 11:31:53.504272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:26.478 [2024-12-10 11:31:53.504283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:26.478 [2024-12-10 11:31:53.504293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:26.478 [2024-12-10 11:31:53.504305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:26.478 [2024-12-10 11:31:53.504315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:26.478 [2024-12-10 11:31:53.504326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:26.478 [2024-12-10 11:31:53.504335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:26.478 [2024-12-10 11:31:53.504346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.478 
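The layout figures above cross-check cleanly: 23592960 L2P entries at the reported address size of 4 bytes is 94,371,840 bytes, exactly the 90.00 MiB that the l2p region occupies in the NV cache layout dump. A quick shell verification of that arithmetic:

  # 23592960 entries * 4 bytes/entry == 90 MiB, matching the l2p region size.
  entries=23592960
  addr_size=4
  echo "$(( entries * addr_size / 1024 / 1024 )) MiB"   # prints: 90 MiB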
[2024-12-10 11:31:53.504355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:26.478 [2024-12-10 11:31:53.504367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:26.478 [2024-12-10 11:31:53.504385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.479 [2024-12-10 11:31:53.504398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:26.479 [2024-12-10 11:31:53.504408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:26.479 [2024-12-10 11:31:53.504419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.479 [2024-12-10 11:31:53.504428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:26.479 [2024-12-10 11:31:53.504442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:26.479 [2024-12-10 11:31:53.504466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.479 [2024-12-10 11:31:53.504478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:26.479 [2024-12-10 11:31:53.504489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:26.479 [2024-12-10 11:31:53.504501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.479 [2024-12-10 11:31:53.504510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:26.479 [2024-12-10 11:31:53.504522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:26.479 [2024-12-10 11:31:53.504531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.479 [2024-12-10 11:31:53.504543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:26.479 [2024-12-10 11:31:53.504552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:26.479 [2024-12-10 11:31:53.504564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:26.479 [2024-12-10 11:31:53.504573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:26.479 [2024-12-10 11:31:53.504585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:26.479 [2024-12-10 11:31:53.504594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:26.479 [2024-12-10 11:31:53.504605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:26.479 [2024-12-10 11:31:53.504615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:26.479 [2024-12-10 11:31:53.504629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.479 [2024-12-10 11:31:53.504638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:26.479 [2024-12-10 11:31:53.504650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:26.479 [2024-12-10 11:31:53.504659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.479 [2024-12-10 11:31:53.504671] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:26.479 [2024-12-10 11:31:53.504685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:26.479 [2024-12-10 11:31:53.504697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:26.479 [2024-12-10 11:31:53.504707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.479 [2024-12-10 11:31:53.504720] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:24:26.479 [2024-12-10 11:31:53.504729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:26.479 [2024-12-10 11:31:53.504740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:26.479 [2024-12-10 11:31:53.504749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:26.479 [2024-12-10 11:31:53.504761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:26.479 [2024-12-10 11:31:53.504771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:26.479 [2024-12-10 11:31:53.504784] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:26.479 [2024-12-10 11:31:53.504796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:26.479 [2024-12-10 11:31:53.504814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:26.479 [2024-12-10 11:31:53.504825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:26.479 [2024-12-10 11:31:53.504838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:26.479 [2024-12-10 11:31:53.504849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:26.479 [2024-12-10 11:31:53.504862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:26.479 [2024-12-10 11:31:53.504873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:26.479 [2024-12-10 11:31:53.504885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:26.479 [2024-12-10 11:31:53.504896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:26.479 [2024-12-10 11:31:53.504908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:26.479 [2024-12-10 11:31:53.504919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:26.479 [2024-12-10 11:31:53.504931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:26.479 [2024-12-10 11:31:53.504941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:26.479 [2024-12-10 11:31:53.504964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:26.479 [2024-12-10 11:31:53.504976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:26.479 [2024-12-10 11:31:53.504989] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:26.479 [2024-12-10 
11:31:53.505001] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:26.479 [2024-12-10 11:31:53.505020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:26.479 [2024-12-10 11:31:53.505031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:26.479 [2024-12-10 11:31:53.505044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:26.479 [2024-12-10 11:31:53.505054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:26.479 [2024-12-10 11:31:53.505067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.479 [2024-12-10 11:31:53.505079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:26.479 [2024-12-10 11:31:53.505092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:24:26.479 [2024-12-10 11:31:53.505104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.479 [2024-12-10 11:31:53.543482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.479 [2024-12-10 11:31:53.543516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:26.479 [2024-12-10 11:31:53.543532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.368 ms 00:24:26.479 [2024-12-10 11:31:53.543545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.479 [2024-12-10 11:31:53.543653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.479 [2024-12-10 11:31:53.543665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:26.479 [2024-12-10 11:31:53.543678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:26.479 [2024-12-10 11:31:53.543688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.738 [2024-12-10 11:31:53.589436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.739 [2024-12-10 11:31:53.589475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:26.739 [2024-12-10 11:31:53.589491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.796 ms 00:24:26.739 [2024-12-10 11:31:53.589502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.739 [2024-12-10 11:31:53.589586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.739 [2024-12-10 11:31:53.589600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:26.739 [2024-12-10 11:31:53.589614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:26.739 [2024-12-10 11:31:53.589624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.739 [2024-12-10 11:31:53.590084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.739 [2024-12-10 11:31:53.590101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:26.739 [2024-12-10 11:31:53.590114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:24:26.739 [2024-12-10 11:31:53.590124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:26.739 [2024-12-10 11:31:53.590241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.739 [2024-12-10 11:31:53.590255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:26.739 [2024-12-10 11:31:53.590267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:24:26.739 [2024-12-10 11:31:53.590293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.739 [2024-12-10 11:31:53.611171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.739 [2024-12-10 11:31:53.611205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:26.739 [2024-12-10 11:31:53.611221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.883 ms 00:24:26.739 [2024-12-10 11:31:53.611232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.739 [2024-12-10 11:31:53.662859] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:26.739 [2024-12-10 11:31:53.662902] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:26.739 [2024-12-10 11:31:53.662936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.739 [2024-12-10 11:31:53.662948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:26.739 [2024-12-10 11:31:53.662962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.676 ms 00:24:26.739 [2024-12-10 11:31:53.662983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.739 [2024-12-10 11:31:53.691155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.739 [2024-12-10 11:31:53.691193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:26.739 [2024-12-10 11:31:53.691210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.124 ms 00:24:26.739 [2024-12-10 11:31:53.691220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.739 [2024-12-10 11:31:53.708208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.739 [2024-12-10 11:31:53.708394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:26.739 [2024-12-10 11:31:53.708424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.915 ms 00:24:26.739 [2024-12-10 11:31:53.708435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.739 [2024-12-10 11:31:53.726006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.739 [2024-12-10 11:31:53.726053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:26.739 [2024-12-10 11:31:53.726069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.518 ms 00:24:26.739 [2024-12-10 11:31:53.726078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.739 [2024-12-10 11:31:53.726742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.739 [2024-12-10 11:31:53.726768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:26.739 [2024-12-10 11:31:53.726783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:24:26.739 [2024-12-10 11:31:53.726792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.739 [2024-12-10 
11:31:53.807232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.739 [2024-12-10 11:31:53.807456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:26.739 [2024-12-10 11:31:53.807485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.538 ms 00:24:26.739 [2024-12-10 11:31:53.807497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.739 [2024-12-10 11:31:53.817613] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:26.739 [2024-12-10 11:31:53.832804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.739 [2024-12-10 11:31:53.832853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:26.739 [2024-12-10 11:31:53.832871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.205 ms 00:24:26.739 [2024-12-10 11:31:53.832883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.739 [2024-12-10 11:31:53.832983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.739 [2024-12-10 11:31:53.833000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:26.739 [2024-12-10 11:31:53.833012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:26.739 [2024-12-10 11:31:53.833025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.739 [2024-12-10 11:31:53.833088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.739 [2024-12-10 11:31:53.833103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:26.739 [2024-12-10 11:31:53.833113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:26.739 [2024-12-10 11:31:53.833129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.739 [2024-12-10 11:31:53.833154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.739 [2024-12-10 11:31:53.833167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:26.739 [2024-12-10 11:31:53.833177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:26.739 [2024-12-10 11:31:53.833190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.739 [2024-12-10 11:31:53.833227] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:26.739 [2024-12-10 11:31:53.833245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.739 [2024-12-10 11:31:53.833259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:26.739 [2024-12-10 11:31:53.833273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:26.739 [2024-12-10 11:31:53.833283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.998 [2024-12-10 11:31:53.866738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.998 [2024-12-10 11:31:53.866779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:26.998 [2024-12-10 11:31:53.866795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.478 ms 00:24:26.998 [2024-12-10 11:31:53.866806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.998 [2024-12-10 11:31:53.866913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.998 [2024-12-10 11:31:53.866950] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:26.998 [2024-12-10 11:31:53.866964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:26.998 [2024-12-10 11:31:53.866977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.998 [2024-12-10 11:31:53.867975] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:26.998 [2024-12-10 11:31:53.872174] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 401.445 ms, result 0 00:24:26.998 [2024-12-10 11:31:53.873765] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:26.998 Some configs were skipped because the RPC state that can call them passed over. 00:24:26.998 11:31:53 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:24:26.998 [2024-12-10 11:31:54.108705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.998 [2024-12-10 11:31:54.108768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:26.998 [2024-12-10 11:31:54.108782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.632 ms 00:24:26.998 [2024-12-10 11:31:54.108796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.998 [2024-12-10 11:31:54.108830] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.757 ms, result 0 00:24:27.257 true 00:24:27.257 11:31:54 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:24:27.257 [2024-12-10 11:31:54.322281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.257 [2024-12-10 11:31:54.322450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:27.257 [2024-12-10 11:31:54.322477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.196 ms 00:24:27.257 [2024-12-10 11:31:54.322489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.257 [2024-12-10 11:31:54.322537] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.455 ms, result 0 00:24:27.257 true 00:24:27.257 11:31:54 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78658 00:24:27.257 11:31:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78658 ']' 00:24:27.257 11:31:54 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78658 00:24:27.257 11:31:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:24:27.257 11:31:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.257 11:31:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78658 00:24:27.517 killing process with pid 78658 00:24:27.517 11:31:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.517 11:31:54 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:27.517 11:31:54 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78658' 00:24:27.517 11:31:54 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78658 00:24:27.517 11:31:54 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78658 00:24:28.457 [2024-12-10 11:31:55.422341] 
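The two bdev_ftl_unmap calls above are deliberately symmetric: the first trims the 1024 LBAs at the very start of the address space, while the second starts at LBA 23591936 so that 23591936 + 1024 = 23592960 -- the exact L2P entry count reported at startup -- i.e. it covers the last 1024 LBAs of the device. Once both trims return true, killprocess sends the target a signal with kill and waits for it to exit, which is what triggers the orderly "FTL shutdown" sequence that follows. The trim calls as issued by the script:

  # Trim the first and the last 1024 blocks of the 23592960-block ftl0 device.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024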
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.457 [2024-12-10 11:31:55.422402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:28.457 [2024-12-10 11:31:55.422417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:28.457 [2024-12-10 11:31:55.422430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.457 [2024-12-10 11:31:55.422456] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:28.457 [2024-12-10 11:31:55.426565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.457 [2024-12-10 11:31:55.426601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:28.457 [2024-12-10 11:31:55.426619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.093 ms 00:24:28.457 [2024-12-10 11:31:55.426629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.457 [2024-12-10 11:31:55.426882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.457 [2024-12-10 11:31:55.426896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:28.457 [2024-12-10 11:31:55.426909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:24:28.457 [2024-12-10 11:31:55.426931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.457 [2024-12-10 11:31:55.430260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.457 [2024-12-10 11:31:55.430297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:28.457 [2024-12-10 11:31:55.430314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.309 ms 00:24:28.457 [2024-12-10 11:31:55.430325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.457 [2024-12-10 11:31:55.435829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.457 [2024-12-10 11:31:55.435865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:28.457 [2024-12-10 11:31:55.435881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.470 ms 00:24:28.457 [2024-12-10 11:31:55.435891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.457 [2024-12-10 11:31:55.450452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.457 [2024-12-10 11:31:55.450499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:28.457 [2024-12-10 11:31:55.450518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.509 ms 00:24:28.457 [2024-12-10 11:31:55.450529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.457 [2024-12-10 11:31:55.461477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.457 [2024-12-10 11:31:55.461658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:28.457 [2024-12-10 11:31:55.461685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.908 ms 00:24:28.457 [2024-12-10 11:31:55.461696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.457 [2024-12-10 11:31:55.461881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.457 [2024-12-10 11:31:55.461896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:28.457 [2024-12-10 11:31:55.461910] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:24:28.457 [2024-12-10 11:31:55.461940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.457 [2024-12-10 11:31:55.477070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.457 [2024-12-10 11:31:55.477106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:28.457 [2024-12-10 11:31:55.477122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.116 ms 00:24:28.457 [2024-12-10 11:31:55.477131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.457 [2024-12-10 11:31:55.491729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.457 [2024-12-10 11:31:55.491911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:28.457 [2024-12-10 11:31:55.491949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.569 ms 00:24:28.457 [2024-12-10 11:31:55.491959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.457 [2024-12-10 11:31:55.505905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.457 [2024-12-10 11:31:55.505945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:28.457 [2024-12-10 11:31:55.505960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.909 ms 00:24:28.457 [2024-12-10 11:31:55.505969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.457 [2024-12-10 11:31:55.519804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.457 [2024-12-10 11:31:55.519836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:28.457 [2024-12-10 11:31:55.519850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.781 ms 00:24:28.457 [2024-12-10 11:31:55.519860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.457 [2024-12-10 11:31:55.519941] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:28.457 [2024-12-10 11:31:55.519975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:28.457 [2024-12-10 11:31:55.519991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:28.457 [2024-12-10 11:31:55.520003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:28.457 [2024-12-10 11:31:55.520017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:28.457 [2024-12-10 11:31:55.520028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:28.457 [2024-12-10 11:31:55.520045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:28.457 [2024-12-10 11:31:55.520056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:28.457 [2024-12-10 11:31:55.520069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:28.457 [2024-12-10 11:31:55.520080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:28.457 [2024-12-10 11:31:55.520094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 
11:31:55.520104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:24:28.458 [2024-12-10 11:31:55.520407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.520997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:28.458 [2024-12-10 11:31:55.521246] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:28.458 [2024-12-10 11:31:55.521265] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6a8cf975-d8b5-43ec-a656-efac3c3b89a7 00:24:28.458 [2024-12-10 11:31:55.521279] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:28.459 [2024-12-10 11:31:55.521292] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:28.459 [2024-12-10 11:31:55.521302] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:28.459 [2024-12-10 11:31:55.521315] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:28.459 [2024-12-10 11:31:55.521325] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:28.459 [2024-12-10 11:31:55.521338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:28.459 [2024-12-10 11:31:55.521348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:28.459 [2024-12-10 11:31:55.521359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:28.459 [2024-12-10 11:31:55.521368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:28.459 [2024-12-10 11:31:55.521381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:28.459 [2024-12-10 11:31:55.521392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:28.459 [2024-12-10 11:31:55.521405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.445 ms 00:24:28.459 [2024-12-10 11:31:55.521424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.459 [2024-12-10 11:31:55.540897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.459 [2024-12-10 11:31:55.541090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:28.459 [2024-12-10 11:31:55.541120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.475 ms 00:24:28.459 [2024-12-10 11:31:55.541132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.459 [2024-12-10 11:31:55.541708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.459 [2024-12-10 11:31:55.541728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:28.459 [2024-12-10 11:31:55.541745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:24:28.459 [2024-12-10 11:31:55.541756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.718 [2024-12-10 11:31:55.607245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.718 [2024-12-10 11:31:55.607409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:28.718 [2024-12-10 11:31:55.607439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.718 [2024-12-10 11:31:55.607451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.718 [2024-12-10 11:31:55.607544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.718 [2024-12-10 11:31:55.607557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:28.718 [2024-12-10 11:31:55.607580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.718 [2024-12-10 11:31:55.607591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.718 [2024-12-10 11:31:55.607647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.718 [2024-12-10 11:31:55.607661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:28.718 [2024-12-10 11:31:55.607681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.718 [2024-12-10 11:31:55.607692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.718 [2024-12-10 11:31:55.607716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.718 [2024-12-10 11:31:55.607727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:28.718 [2024-12-10 11:31:55.607744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.718 [2024-12-10 11:31:55.607759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.718 [2024-12-10 11:31:55.724305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.718 [2024-12-10 11:31:55.724353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:28.718 [2024-12-10 11:31:55.724371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.718 [2024-12-10 11:31:55.724382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.718 [2024-12-10 
11:31:55.818301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.718 [2024-12-10 11:31:55.818346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:28.718 [2024-12-10 11:31:55.818363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.718 [2024-12-10 11:31:55.818376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.718 [2024-12-10 11:31:55.818455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.718 [2024-12-10 11:31:55.818467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:28.718 [2024-12-10 11:31:55.818484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.718 [2024-12-10 11:31:55.818493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.718 [2024-12-10 11:31:55.818524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.718 [2024-12-10 11:31:55.818535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:28.718 [2024-12-10 11:31:55.818548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.718 [2024-12-10 11:31:55.818557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.718 [2024-12-10 11:31:55.818662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.718 [2024-12-10 11:31:55.818676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:28.718 [2024-12-10 11:31:55.818688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.718 [2024-12-10 11:31:55.818698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.718 [2024-12-10 11:31:55.818736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.718 [2024-12-10 11:31:55.818748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:28.718 [2024-12-10 11:31:55.818762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.718 [2024-12-10 11:31:55.818771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.718 [2024-12-10 11:31:55.818815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.718 [2024-12-10 11:31:55.818826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:28.718 [2024-12-10 11:31:55.818840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.718 [2024-12-10 11:31:55.818850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.718 [2024-12-10 11:31:55.818896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.718 [2024-12-10 11:31:55.818907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:28.718 [2024-12-10 11:31:55.818945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.718 [2024-12-10 11:31:55.818957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.718 [2024-12-10 11:31:55.819130] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 397.385 ms, result 0 00:24:30.098 11:31:56 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:30.098 11:31:56 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:30.098 [2024-12-10 11:31:56.887519] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:24:30.098 [2024-12-10 11:31:56.887648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78727 ] 00:24:30.098 [2024-12-10 11:31:57.072618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.098 [2024-12-10 11:31:57.180443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.666 [2024-12-10 11:31:57.524563] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:30.666 [2024-12-10 11:31:57.524631] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:30.666 [2024-12-10 11:31:57.687123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.666 [2024-12-10 11:31:57.687170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:30.666 [2024-12-10 11:31:57.687186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:30.666 [2024-12-10 11:31:57.687197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.666 [2024-12-10 11:31:57.690460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.666 [2024-12-10 11:31:57.690620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:30.666 [2024-12-10 11:31:57.690657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.247 ms 00:24:30.666 [2024-12-10 11:31:57.690669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.666 [2024-12-10 11:31:57.690772] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:30.666 [2024-12-10 11:31:57.691805] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:30.666 [2024-12-10 11:31:57.691840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.666 [2024-12-10 11:31:57.691851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:30.666 [2024-12-10 11:31:57.691863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:24:30.666 [2024-12-10 11:31:57.691874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.666 [2024-12-10 11:31:57.693366] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:30.666 [2024-12-10 11:31:57.711647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.666 [2024-12-10 11:31:57.711683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:30.666 [2024-12-10 11:31:57.711698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.311 ms 00:24:30.666 [2024-12-10 11:31:57.711708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.666 [2024-12-10 11:31:57.711810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.667 [2024-12-10 11:31:57.711824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:30.667 [2024-12-10 11:31:57.711836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.025 ms 00:24:30.667 [2024-12-10 11:31:57.711846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.667 [2024-12-10 11:31:57.718753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.667 [2024-12-10 11:31:57.718785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:30.667 [2024-12-10 11:31:57.718797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.876 ms 00:24:30.667 [2024-12-10 11:31:57.718808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.667 [2024-12-10 11:31:57.718907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.667 [2024-12-10 11:31:57.718940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:30.667 [2024-12-10 11:31:57.718952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:30.667 [2024-12-10 11:31:57.718962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.667 [2024-12-10 11:31:57.718995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.667 [2024-12-10 11:31:57.719007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:30.667 [2024-12-10 11:31:57.719017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:30.667 [2024-12-10 11:31:57.719028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.667 [2024-12-10 11:31:57.719052] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:30.667 [2024-12-10 11:31:57.723891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.667 [2024-12-10 11:31:57.723930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:30.667 [2024-12-10 11:31:57.723943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.853 ms 00:24:30.667 [2024-12-10 11:31:57.723953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.667 [2024-12-10 11:31:57.724027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.667 [2024-12-10 11:31:57.724040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:30.667 [2024-12-10 11:31:57.724051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:30.667 [2024-12-10 11:31:57.724061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.667 [2024-12-10 11:31:57.724087] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:30.667 [2024-12-10 11:31:57.724111] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:30.667 [2024-12-10 11:31:57.724146] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:30.667 [2024-12-10 11:31:57.724163] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:30.667 [2024-12-10 11:31:57.724252] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:30.667 [2024-12-10 11:31:57.724265] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:30.667 [2024-12-10 11:31:57.724278] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:30.667 [2024-12-10 11:31:57.724295] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:30.667 [2024-12-10 11:31:57.724307] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:30.667 [2024-12-10 11:31:57.724318] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:30.667 [2024-12-10 11:31:57.724328] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:30.667 [2024-12-10 11:31:57.724338] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:30.667 [2024-12-10 11:31:57.724348] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:30.667 [2024-12-10 11:31:57.724359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.667 [2024-12-10 11:31:57.724370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:30.667 [2024-12-10 11:31:57.724380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:24:30.667 [2024-12-10 11:31:57.724390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.667 [2024-12-10 11:31:57.724465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.667 [2024-12-10 11:31:57.724480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:30.667 [2024-12-10 11:31:57.724490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:30.667 [2024-12-10 11:31:57.724500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.667 [2024-12-10 11:31:57.724587] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:30.667 [2024-12-10 11:31:57.724605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:30.667 [2024-12-10 11:31:57.724617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:30.667 [2024-12-10 11:31:57.724628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.667 [2024-12-10 11:31:57.724638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:30.667 [2024-12-10 11:31:57.724648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:30.667 [2024-12-10 11:31:57.724658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:30.667 [2024-12-10 11:31:57.724668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:30.667 [2024-12-10 11:31:57.724677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:30.667 [2024-12-10 11:31:57.724687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:30.667 [2024-12-10 11:31:57.724696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:30.667 [2024-12-10 11:31:57.724717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:30.667 [2024-12-10 11:31:57.724727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:30.667 [2024-12-10 11:31:57.724737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:30.667 [2024-12-10 11:31:57.724746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:30.667 [2024-12-10 11:31:57.724755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.667 [2024-12-10 11:31:57.724765] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:30.667 [2024-12-10 11:31:57.724775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:30.667 [2024-12-10 11:31:57.724784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.667 [2024-12-10 11:31:57.724794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:30.667 [2024-12-10 11:31:57.724803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:30.667 [2024-12-10 11:31:57.724813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.667 [2024-12-10 11:31:57.724822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:30.667 [2024-12-10 11:31:57.724832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:30.667 [2024-12-10 11:31:57.724841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.667 [2024-12-10 11:31:57.724851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:30.667 [2024-12-10 11:31:57.724860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:30.667 [2024-12-10 11:31:57.724869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.667 [2024-12-10 11:31:57.724878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:30.667 [2024-12-10 11:31:57.724888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:30.667 [2024-12-10 11:31:57.724897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.667 [2024-12-10 11:31:57.724906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:30.667 [2024-12-10 11:31:57.724927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:30.667 [2024-12-10 11:31:57.724937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:30.667 [2024-12-10 11:31:57.724947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:30.667 [2024-12-10 11:31:57.724957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:30.667 [2024-12-10 11:31:57.724966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:30.667 [2024-12-10 11:31:57.724976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:30.667 [2024-12-10 11:31:57.724985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:30.667 [2024-12-10 11:31:57.724994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.667 [2024-12-10 11:31:57.725004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:30.667 [2024-12-10 11:31:57.725013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:30.667 [2024-12-10 11:31:57.725023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.667 [2024-12-10 11:31:57.725034] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:30.667 [2024-12-10 11:31:57.725045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:30.667 [2024-12-10 11:31:57.725059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:30.667 [2024-12-10 11:31:57.725069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.667 [2024-12-10 11:31:57.725079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:30.667 
[2024-12-10 11:31:57.725089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:30.667 [2024-12-10 11:31:57.725099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:30.667 [2024-12-10 11:31:57.725109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:30.667 [2024-12-10 11:31:57.725119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:30.667 [2024-12-10 11:31:57.725128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:30.667 [2024-12-10 11:31:57.725140] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:30.667 [2024-12-10 11:31:57.725152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:30.667 [2024-12-10 11:31:57.725164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:30.667 [2024-12-10 11:31:57.725176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:30.667 [2024-12-10 11:31:57.725186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:30.667 [2024-12-10 11:31:57.725197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:30.667 [2024-12-10 11:31:57.725207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:30.668 [2024-12-10 11:31:57.725218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:30.668 [2024-12-10 11:31:57.725228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:30.668 [2024-12-10 11:31:57.725238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:30.668 [2024-12-10 11:31:57.725249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:30.668 [2024-12-10 11:31:57.725260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:30.668 [2024-12-10 11:31:57.725270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:30.668 [2024-12-10 11:31:57.725280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:30.668 [2024-12-10 11:31:57.725290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:30.668 [2024-12-10 11:31:57.725301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:30.668 [2024-12-10 11:31:57.725310] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:30.668 [2024-12-10 11:31:57.725322] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:30.668 [2024-12-10 11:31:57.725333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:30.668 [2024-12-10 11:31:57.725344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:30.668 [2024-12-10 11:31:57.725354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:30.668 [2024-12-10 11:31:57.725365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:30.668 [2024-12-10 11:31:57.725376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.668 [2024-12-10 11:31:57.725391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:30.668 [2024-12-10 11:31:57.725401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:24:30.668 [2024-12-10 11:31:57.725421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.668 [2024-12-10 11:31:57.764155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.668 [2024-12-10 11:31:57.764341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:30.668 [2024-12-10 11:31:57.764365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.738 ms 00:24:30.668 [2024-12-10 11:31:57.764376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.668 [2024-12-10 11:31:57.764504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.668 [2024-12-10 11:31:57.764517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:30.668 [2024-12-10 11:31:57.764529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:30.668 [2024-12-10 11:31:57.764540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.927 [2024-12-10 11:31:57.821148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.927 [2024-12-10 11:31:57.821185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:30.927 [2024-12-10 11:31:57.821202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.677 ms 00:24:30.927 [2024-12-10 11:31:57.821213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.927 [2024-12-10 11:31:57.821308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.927 [2024-12-10 11:31:57.821321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:30.927 [2024-12-10 11:31:57.821332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:30.927 [2024-12-10 11:31:57.821342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.927 [2024-12-10 11:31:57.821792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.927 [2024-12-10 11:31:57.821807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:30.927 [2024-12-10 11:31:57.821822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:24:30.927 [2024-12-10 11:31:57.821833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.927 [2024-12-10 
11:31:57.821982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.928 [2024-12-10 11:31:57.821997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:30.928 [2024-12-10 11:31:57.822008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:24:30.928 [2024-12-10 11:31:57.822018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.928 [2024-12-10 11:31:57.842043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.928 [2024-12-10 11:31:57.842077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:30.928 [2024-12-10 11:31:57.842089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.033 ms 00:24:30.928 [2024-12-10 11:31:57.842115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.928 [2024-12-10 11:31:57.860772] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:30.928 [2024-12-10 11:31:57.860809] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:30.928 [2024-12-10 11:31:57.860823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.928 [2024-12-10 11:31:57.860850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:30.928 [2024-12-10 11:31:57.860862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.629 ms 00:24:30.928 [2024-12-10 11:31:57.860872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.928 [2024-12-10 11:31:57.889745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.928 [2024-12-10 11:31:57.889878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:30.928 [2024-12-10 11:31:57.889901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.826 ms 00:24:30.928 [2024-12-10 11:31:57.889913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.928 [2024-12-10 11:31:57.907815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.928 [2024-12-10 11:31:57.907851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:30.928 [2024-12-10 11:31:57.907863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.817 ms 00:24:30.928 [2024-12-10 11:31:57.907888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.928 [2024-12-10 11:31:57.924998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.928 [2024-12-10 11:31:57.925035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:30.928 [2024-12-10 11:31:57.925049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.044 ms 00:24:30.928 [2024-12-10 11:31:57.925059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.928 [2024-12-10 11:31:57.925796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.928 [2024-12-10 11:31:57.925820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:30.928 [2024-12-10 11:31:57.925832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:24:30.928 [2024-12-10 11:31:57.925842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.928 [2024-12-10 11:31:58.009535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:30.928 [2024-12-10 11:31:58.009748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:30.928 [2024-12-10 11:31:58.009791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.784 ms 00:24:30.928 [2024-12-10 11:31:58.009803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.928 [2024-12-10 11:31:58.019851] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:30.928 [2024-12-10 11:31:58.035278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.928 [2024-12-10 11:31:58.035481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:30.928 [2024-12-10 11:31:58.035524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.402 ms 00:24:30.928 [2024-12-10 11:31:58.035543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.928 [2024-12-10 11:31:58.035664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.928 [2024-12-10 11:31:58.035678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:30.928 [2024-12-10 11:31:58.035690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:30.928 [2024-12-10 11:31:58.035701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.928 [2024-12-10 11:31:58.035755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.928 [2024-12-10 11:31:58.035767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:30.928 [2024-12-10 11:31:58.035778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:30.928 [2024-12-10 11:31:58.035792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.928 [2024-12-10 11:31:58.035825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.928 [2024-12-10 11:31:58.035838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:30.928 [2024-12-10 11:31:58.035849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:30.928 [2024-12-10 11:31:58.035859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.928 [2024-12-10 11:31:58.035897] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:30.928 [2024-12-10 11:31:58.035910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.928 [2024-12-10 11:31:58.035920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:30.928 [2024-12-10 11:31:58.035950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:30.928 [2024-12-10 11:31:58.035962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.187 [2024-12-10 11:31:58.070019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.187 [2024-12-10 11:31:58.070059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:31.187 [2024-12-10 11:31:58.070073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.089 ms 00:24:31.187 [2024-12-10 11:31:58.070100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.187 [2024-12-10 11:31:58.070208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.187 [2024-12-10 11:31:58.070222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:24:31.187 [2024-12-10 11:31:58.070233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:31.187 [2024-12-10 11:31:58.070244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.187 [2024-12-10 11:31:58.071165] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:31.187 [2024-12-10 11:31:58.075295] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 384.346 ms, result 0 00:24:31.187 [2024-12-10 11:31:58.076154] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:31.187 [2024-12-10 11:31:58.094302] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:32.125  [2024-12-10T11:32:00.176Z] Copying: 26/256 [MB] (26 MBps) [2024-12-10T11:32:01.114Z] Copying: 50/256 [MB] (23 MBps) [2024-12-10T11:32:02.493Z] Copying: 75/256 [MB] (24 MBps) [2024-12-10T11:32:03.430Z] Copying: 99/256 [MB] (24 MBps) [2024-12-10T11:32:04.367Z] Copying: 123/256 [MB] (23 MBps) [2024-12-10T11:32:05.304Z] Copying: 148/256 [MB] (24 MBps) [2024-12-10T11:32:06.240Z] Copying: 171/256 [MB] (23 MBps) [2024-12-10T11:32:07.176Z] Copying: 195/256 [MB] (24 MBps) [2024-12-10T11:32:08.114Z] Copying: 220/256 [MB] (24 MBps) [2024-12-10T11:32:08.681Z] Copying: 244/256 [MB] (24 MBps) [2024-12-10T11:32:08.681Z] Copying: 256/256 [MB] (average 24 MBps)[2024-12-10 11:32:08.523334] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:41.567 [2024-12-10 11:32:08.537531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.567 [2024-12-10 11:32:08.537571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:41.567 [2024-12-10 11:32:08.537607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:41.567 [2024-12-10 11:32:08.537617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.567 [2024-12-10 11:32:08.537640] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:41.567 [2024-12-10 11:32:08.541692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.567 [2024-12-10 11:32:08.541719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:41.567 [2024-12-10 11:32:08.541731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.042 ms 00:24:41.567 [2024-12-10 11:32:08.541740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.567 [2024-12-10 11:32:08.541976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.567 [2024-12-10 11:32:08.541990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:41.567 [2024-12-10 11:32:08.542001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:24:41.567 [2024-12-10 11:32:08.542011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.567 [2024-12-10 11:32:08.544840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.567 [2024-12-10 11:32:08.545444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:41.567 [2024-12-10 11:32:08.545468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.812 ms 00:24:41.567 [2024-12-10 11:32:08.545479] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.567 [2024-12-10 11:32:08.550985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.567 [2024-12-10 11:32:08.551014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:41.567 [2024-12-10 11:32:08.551024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.485 ms 00:24:41.567 [2024-12-10 11:32:08.551034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.567 [2024-12-10 11:32:08.584845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.567 [2024-12-10 11:32:08.584880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:41.567 [2024-12-10 11:32:08.584894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.812 ms 00:24:41.567 [2024-12-10 11:32:08.584903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.567 [2024-12-10 11:32:08.604931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.567 [2024-12-10 11:32:08.604967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:41.567 [2024-12-10 11:32:08.604987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.981 ms 00:24:41.567 [2024-12-10 11:32:08.604997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.567 [2024-12-10 11:32:08.605123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.567 [2024-12-10 11:32:08.605136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:41.567 [2024-12-10 11:32:08.605155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:24:41.567 [2024-12-10 11:32:08.605165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.567 [2024-12-10 11:32:08.639752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.567 [2024-12-10 11:32:08.639787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:41.567 [2024-12-10 11:32:08.639799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.626 ms 00:24:41.567 [2024-12-10 11:32:08.639823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.567 [2024-12-10 11:32:08.674324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.567 [2024-12-10 11:32:08.674464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:41.567 [2024-12-10 11:32:08.674482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.500 ms 00:24:41.567 [2024-12-10 11:32:08.674508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.827 [2024-12-10 11:32:08.707247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.827 [2024-12-10 11:32:08.707282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:41.827 [2024-12-10 11:32:08.707295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.721 ms 00:24:41.827 [2024-12-10 11:32:08.707304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.827 [2024-12-10 11:32:08.740984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.827 [2024-12-10 11:32:08.741020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:41.827 [2024-12-10 11:32:08.741032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 33.660 ms 00:24:41.827 [2024-12-10 11:32:08.741057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.827 [2024-12-10 11:32:08.741113] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:41.827 [2024-12-10 11:32:08.741130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:41.827 [2024-12-10 11:32:08.741143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:41.827 [2024-12-10 11:32:08.741153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:41.827 [2024-12-10 11:32:08.741164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:41.827 [2024-12-10 11:32:08.741175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:41.827 [2024-12-10 11:32:08.741185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:41.827 [2024-12-10 11:32:08.741196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:41.827 [2024-12-10 11:32:08.741207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:41.827 [2024-12-10 11:32:08.741217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:41.827 [2024-12-10 11:32:08.741228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:41.827 [2024-12-10 11:32:08.741239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:41.827 [2024-12-10 11:32:08.741249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:41.827 [2024-12-10 11:32:08.741259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 
[2024-12-10 11:32:08.741369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:24:41.828 [2024-12-10 11:32:08.741637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.741991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:41.828 [2024-12-10 11:32:08.742231] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:41.828 [2024-12-10 11:32:08.742242] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6a8cf975-d8b5-43ec-a656-efac3c3b89a7 00:24:41.828 [2024-12-10 11:32:08.742253] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:41.829 [2024-12-10 11:32:08.742263] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:41.829 [2024-12-10 11:32:08.742273] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:41.829 [2024-12-10 11:32:08.742283] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:41.829 [2024-12-10 11:32:08.742292] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:41.829 [2024-12-10 11:32:08.742302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:41.829 [2024-12-10 11:32:08.742317] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:41.829 [2024-12-10 11:32:08.742326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:41.829 [2024-12-10 11:32:08.742335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:41.829 [2024-12-10 11:32:08.742344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.829 [2024-12-10 11:32:08.742354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:41.829 [2024-12-10 11:32:08.742365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:24:41.829 [2024-12-10 11:32:08.742375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.829 [2024-12-10 11:32:08.761571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.829 [2024-12-10 11:32:08.761607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:41.829 [2024-12-10 11:32:08.761619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.207 ms 00:24:41.829 [2024-12-10 11:32:08.761629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.829 [2024-12-10 11:32:08.762352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.829 [2024-12-10 11:32:08.762476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:41.829 [2024-12-10 11:32:08.762553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:24:41.829 [2024-12-10 11:32:08.762589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.829 [2024-12-10 11:32:08.813026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.829 [2024-12-10 11:32:08.813156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:41.829 [2024-12-10 11:32:08.813176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.829 [2024-12-10 11:32:08.813209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.829 [2024-12-10 11:32:08.813284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.829 [2024-12-10 
11:32:08.813295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:41.829 [2024-12-10 11:32:08.813306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.829 [2024-12-10 11:32:08.813316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.829 [2024-12-10 11:32:08.813366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.829 [2024-12-10 11:32:08.813378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:41.829 [2024-12-10 11:32:08.813389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.829 [2024-12-10 11:32:08.813399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.829 [2024-12-10 11:32:08.813431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.829 [2024-12-10 11:32:08.813442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:41.829 [2024-12-10 11:32:08.813453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.829 [2024-12-10 11:32:08.813463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.829 [2024-12-10 11:32:08.929770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.829 [2024-12-10 11:32:08.929822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:41.829 [2024-12-10 11:32:08.929836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.829 [2024-12-10 11:32:08.929846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.088 [2024-12-10 11:32:09.023362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.088 [2024-12-10 11:32:09.023410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:42.088 [2024-12-10 11:32:09.023424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.088 [2024-12-10 11:32:09.023434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.088 [2024-12-10 11:32:09.023497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.088 [2024-12-10 11:32:09.023509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:42.089 [2024-12-10 11:32:09.023519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.089 [2024-12-10 11:32:09.023529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.089 [2024-12-10 11:32:09.023556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.089 [2024-12-10 11:32:09.023573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:42.089 [2024-12-10 11:32:09.023583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.089 [2024-12-10 11:32:09.023592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.089 [2024-12-10 11:32:09.023688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.089 [2024-12-10 11:32:09.023700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:42.089 [2024-12-10 11:32:09.023711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.089 [2024-12-10 11:32:09.023720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.089 [2024-12-10 11:32:09.023755] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.089 [2024-12-10 11:32:09.023766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:42.089 [2024-12-10 11:32:09.023780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.089 [2024-12-10 11:32:09.023790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.089 [2024-12-10 11:32:09.023830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.089 [2024-12-10 11:32:09.023841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:42.089 [2024-12-10 11:32:09.023851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.089 [2024-12-10 11:32:09.023860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.089 [2024-12-10 11:32:09.023902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.089 [2024-12-10 11:32:09.023938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:42.089 [2024-12-10 11:32:09.023949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.089 [2024-12-10 11:32:09.023975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.089 [2024-12-10 11:32:09.024128] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 487.387 ms, result 0 00:24:43.027 00:24:43.027 00:24:43.027 11:32:10 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:24:43.027 11:32:10 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:43.595 11:32:10 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:43.595 [2024-12-10 11:32:10.549388] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:24:43.595 [2024-12-10 11:32:10.549509] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78865 ] 00:24:43.854 [2024-12-10 11:32:10.729669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.854 [2024-12-10 11:32:10.835282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.113 [2024-12-10 11:32:11.174383] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:44.113 [2024-12-10 11:32:11.174448] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:44.374 [2024-12-10 11:32:11.335030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.374 [2024-12-10 11:32:11.335075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:44.374 [2024-12-10 11:32:11.335090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:44.374 [2024-12-10 11:32:11.335100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.374 [2024-12-10 11:32:11.338142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.374 [2024-12-10 11:32:11.338317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:44.374 [2024-12-10 11:32:11.338337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.028 ms 00:24:44.374 [2024-12-10 11:32:11.338349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.374 [2024-12-10 11:32:11.338442] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:44.374 [2024-12-10 11:32:11.339517] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:44.374 [2024-12-10 11:32:11.339552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.374 [2024-12-10 11:32:11.339563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:44.374 [2024-12-10 11:32:11.339574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.120 ms 00:24:44.374 [2024-12-10 11:32:11.339584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.374 [2024-12-10 11:32:11.341078] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:44.374 [2024-12-10 11:32:11.358499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.374 [2024-12-10 11:32:11.358536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:44.374 [2024-12-10 11:32:11.358550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.450 ms 00:24:44.374 [2024-12-10 11:32:11.358575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.374 [2024-12-10 11:32:11.358675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.374 [2024-12-10 11:32:11.358690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:44.374 [2024-12-10 11:32:11.358701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:44.374 [2024-12-10 11:32:11.358710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.374 [2024-12-10 11:32:11.365516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:44.374 [2024-12-10 11:32:11.365542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:44.374 [2024-12-10 11:32:11.365553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.775 ms 00:24:44.374 [2024-12-10 11:32:11.365578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.374 [2024-12-10 11:32:11.365675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.374 [2024-12-10 11:32:11.365689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:44.374 [2024-12-10 11:32:11.365700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:44.374 [2024-12-10 11:32:11.365710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.374 [2024-12-10 11:32:11.365741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.374 [2024-12-10 11:32:11.365752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:44.374 [2024-12-10 11:32:11.365762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:44.374 [2024-12-10 11:32:11.365772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.374 [2024-12-10 11:32:11.365794] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:44.374 [2024-12-10 11:32:11.370341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.374 [2024-12-10 11:32:11.370374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:44.374 [2024-12-10 11:32:11.370386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.560 ms 00:24:44.374 [2024-12-10 11:32:11.370396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.374 [2024-12-10 11:32:11.370460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.374 [2024-12-10 11:32:11.370472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:44.374 [2024-12-10 11:32:11.370483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:44.374 [2024-12-10 11:32:11.370491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.374 [2024-12-10 11:32:11.370514] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:44.374 [2024-12-10 11:32:11.370537] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:44.374 [2024-12-10 11:32:11.370571] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:44.374 [2024-12-10 11:32:11.370588] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:44.374 [2024-12-10 11:32:11.370671] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:44.374 [2024-12-10 11:32:11.370683] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:44.374 [2024-12-10 11:32:11.370696] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:44.374 [2024-12-10 11:32:11.370712] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:44.374 [2024-12-10 11:32:11.370723] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:44.374 [2024-12-10 11:32:11.370734] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:44.374 [2024-12-10 11:32:11.370743] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:44.374 [2024-12-10 11:32:11.370752] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:44.374 [2024-12-10 11:32:11.370761] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:44.374 [2024-12-10 11:32:11.370771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.374 [2024-12-10 11:32:11.370781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:44.374 [2024-12-10 11:32:11.370791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:24:44.374 [2024-12-10 11:32:11.370800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.375 [2024-12-10 11:32:11.370870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.375 [2024-12-10 11:32:11.370885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:44.375 [2024-12-10 11:32:11.370895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:44.375 [2024-12-10 11:32:11.370904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.375 [2024-12-10 11:32:11.371029] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:44.375 [2024-12-10 11:32:11.371045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:44.375 [2024-12-10 11:32:11.371056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:44.375 [2024-12-10 11:32:11.371066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.375 [2024-12-10 11:32:11.371076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:44.375 [2024-12-10 11:32:11.371086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:44.375 [2024-12-10 11:32:11.371095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:44.375 [2024-12-10 11:32:11.371106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:44.375 [2024-12-10 11:32:11.371116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:44.375 [2024-12-10 11:32:11.371125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:44.375 [2024-12-10 11:32:11.371135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:44.375 [2024-12-10 11:32:11.371153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:44.375 [2024-12-10 11:32:11.371163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:44.375 [2024-12-10 11:32:11.371172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:44.375 [2024-12-10 11:32:11.371181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:44.375 [2024-12-10 11:32:11.371191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.375 [2024-12-10 11:32:11.371200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:44.375 [2024-12-10 11:32:11.371209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:44.375 [2024-12-10 11:32:11.371218] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.375 [2024-12-10 11:32:11.371227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:44.375 [2024-12-10 11:32:11.371235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:44.375 [2024-12-10 11:32:11.371245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.375 [2024-12-10 11:32:11.371253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:44.375 [2024-12-10 11:32:11.371262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:44.375 [2024-12-10 11:32:11.371271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.375 [2024-12-10 11:32:11.371279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:44.375 [2024-12-10 11:32:11.371288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:44.375 [2024-12-10 11:32:11.371297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.375 [2024-12-10 11:32:11.371306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:44.375 [2024-12-10 11:32:11.371315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:44.375 [2024-12-10 11:32:11.371323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.375 [2024-12-10 11:32:11.371332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:44.375 [2024-12-10 11:32:11.371341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:44.375 [2024-12-10 11:32:11.371349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:44.375 [2024-12-10 11:32:11.371358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:44.375 [2024-12-10 11:32:11.371367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:44.375 [2024-12-10 11:32:11.371375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:44.375 [2024-12-10 11:32:11.371384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:44.375 [2024-12-10 11:32:11.371393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:44.375 [2024-12-10 11:32:11.371402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.375 [2024-12-10 11:32:11.371411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:44.375 [2024-12-10 11:32:11.371419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:44.375 [2024-12-10 11:32:11.371429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.375 [2024-12-10 11:32:11.371437] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:44.375 [2024-12-10 11:32:11.371447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:44.375 [2024-12-10 11:32:11.371460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:44.375 [2024-12-10 11:32:11.371470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.375 [2024-12-10 11:32:11.371479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:44.375 [2024-12-10 11:32:11.371488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:44.375 [2024-12-10 11:32:11.371497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:44.375 
[2024-12-10 11:32:11.371506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:44.375 [2024-12-10 11:32:11.371515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:44.375 [2024-12-10 11:32:11.371524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:44.375 [2024-12-10 11:32:11.371534] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:44.375 [2024-12-10 11:32:11.371545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:44.375 [2024-12-10 11:32:11.371557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:44.375 [2024-12-10 11:32:11.371566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:44.375 [2024-12-10 11:32:11.371577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:44.375 [2024-12-10 11:32:11.371586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:44.375 [2024-12-10 11:32:11.371598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:44.375 [2024-12-10 11:32:11.371607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:44.375 [2024-12-10 11:32:11.371617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:44.375 [2024-12-10 11:32:11.371627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:44.375 [2024-12-10 11:32:11.371637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:44.375 [2024-12-10 11:32:11.371647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:44.375 [2024-12-10 11:32:11.371657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:44.375 [2024-12-10 11:32:11.371667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:44.375 [2024-12-10 11:32:11.371677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:44.375 [2024-12-10 11:32:11.371687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:44.375 [2024-12-10 11:32:11.371696] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:44.375 [2024-12-10 11:32:11.371707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:44.375 [2024-12-10 11:32:11.371719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:44.375 [2024-12-10 11:32:11.371729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:44.375 [2024-12-10 11:32:11.371739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:44.375 [2024-12-10 11:32:11.371750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:44.375 [2024-12-10 11:32:11.371761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.375 [2024-12-10 11:32:11.371775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:44.375 [2024-12-10 11:32:11.371785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:24:44.375 [2024-12-10 11:32:11.371794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.375 [2024-12-10 11:32:11.411675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.375 [2024-12-10 11:32:11.411846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:44.375 [2024-12-10 11:32:11.412002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.888 ms 00:24:44.375 [2024-12-10 11:32:11.412042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.375 [2024-12-10 11:32:11.412187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.375 [2024-12-10 11:32:11.412319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:44.375 [2024-12-10 11:32:11.412412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:44.375 [2024-12-10 11:32:11.412443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.375 [2024-12-10 11:32:11.482619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.375 [2024-12-10 11:32:11.482783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:44.375 [2024-12-10 11:32:11.482890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.244 ms 00:24:44.375 [2024-12-10 11:32:11.482947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.375 [2024-12-10 11:32:11.483065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.375 [2024-12-10 11:32:11.483170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:44.375 [2024-12-10 11:32:11.483209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:44.375 [2024-12-10 11:32:11.483239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.375 [2024-12-10 11:32:11.483760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.375 [2024-12-10 11:32:11.483872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:44.375 [2024-12-10 11:32:11.483963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:24:44.375 [2024-12-10 11:32:11.484000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.375 [2024-12-10 11:32:11.484145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.376 [2024-12-10 11:32:11.484220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:44.376 [2024-12-10 11:32:11.484292] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:24:44.376 [2024-12-10 11:32:11.484322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.635 [2024-12-10 11:32:11.503518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.635 [2024-12-10 11:32:11.503646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:44.635 [2024-12-10 11:32:11.503734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.182 ms 00:24:44.635 [2024-12-10 11:32:11.503769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.635 [2024-12-10 11:32:11.521593] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:44.635 [2024-12-10 11:32:11.521761] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:44.635 [2024-12-10 11:32:11.521866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.635 [2024-12-10 11:32:11.521899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:44.635 [2024-12-10 11:32:11.521948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.998 ms 00:24:44.635 [2024-12-10 11:32:11.521980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.635 [2024-12-10 11:32:11.549673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.635 [2024-12-10 11:32:11.549798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:44.635 [2024-12-10 11:32:11.549883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.642 ms 00:24:44.635 [2024-12-10 11:32:11.549928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.635 [2024-12-10 11:32:11.567029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.635 [2024-12-10 11:32:11.567148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:44.635 [2024-12-10 11:32:11.567298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.027 ms 00:24:44.635 [2024-12-10 11:32:11.567335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.635 [2024-12-10 11:32:11.584511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.635 [2024-12-10 11:32:11.584631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:44.635 [2024-12-10 11:32:11.584715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.109 ms 00:24:44.635 [2024-12-10 11:32:11.584749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.635 [2024-12-10 11:32:11.585529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.635 [2024-12-10 11:32:11.585657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:44.635 [2024-12-10 11:32:11.585733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:24:44.635 [2024-12-10 11:32:11.585748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.635 [2024-12-10 11:32:11.666905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.635 [2024-12-10 11:32:11.668734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:44.635 [2024-12-10 11:32:11.668879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 81.253 ms 00:24:44.635 [2024-12-10 11:32:11.668929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.635 [2024-12-10 11:32:11.679136] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:44.635 [2024-12-10 11:32:11.694562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.635 [2024-12-10 11:32:11.694752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:44.635 [2024-12-10 11:32:11.694792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.558 ms 00:24:44.635 [2024-12-10 11:32:11.694811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.635 [2024-12-10 11:32:11.694952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.635 [2024-12-10 11:32:11.694967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:44.635 [2024-12-10 11:32:11.694978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:44.635 [2024-12-10 11:32:11.694989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.635 [2024-12-10 11:32:11.695049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.635 [2024-12-10 11:32:11.695061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:44.635 [2024-12-10 11:32:11.695072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:44.635 [2024-12-10 11:32:11.695086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.635 [2024-12-10 11:32:11.695119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.635 [2024-12-10 11:32:11.695133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:44.635 [2024-12-10 11:32:11.695143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:44.635 [2024-12-10 11:32:11.695153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.635 [2024-12-10 11:32:11.695191] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:44.635 [2024-12-10 11:32:11.695204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.635 [2024-12-10 11:32:11.695214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:44.635 [2024-12-10 11:32:11.695224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:44.635 [2024-12-10 11:32:11.695235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.635 [2024-12-10 11:32:11.729521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.635 [2024-12-10 11:32:11.729560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:44.635 [2024-12-10 11:32:11.729574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.320 ms 00:24:44.635 [2024-12-10 11:32:11.729583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.635 [2024-12-10 11:32:11.729686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.635 [2024-12-10 11:32:11.729699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:44.635 [2024-12-10 11:32:11.729709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:44.635 [2024-12-10 11:32:11.729718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:44.635 [2024-12-10 11:32:11.730652] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:44.636 [2024-12-10 11:32:11.734780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 395.974 ms, result 0 00:24:44.636 [2024-12-10 11:32:11.735717] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:44.895 [2024-12-10 11:32:11.753384] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:44.895  [2024-12-10T11:32:12.009Z] Copying: 4096/4096 [kB] (average 25 MBps)[2024-12-10 11:32:11.916895] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:44.895 [2024-12-10 11:32:11.931211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.895 [2024-12-10 11:32:11.931357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:44.895 [2024-12-10 11:32:11.931386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:44.895 [2024-12-10 11:32:11.931396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.895 [2024-12-10 11:32:11.931427] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:44.895 [2024-12-10 11:32:11.935408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.895 [2024-12-10 11:32:11.935436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:44.895 [2024-12-10 11:32:11.935447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.971 ms 00:24:44.895 [2024-12-10 11:32:11.935456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.895 [2024-12-10 11:32:11.937304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.895 [2024-12-10 11:32:11.937339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:44.895 [2024-12-10 11:32:11.937352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.812 ms 00:24:44.895 [2024-12-10 11:32:11.937363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.895 [2024-12-10 11:32:11.940645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.895 [2024-12-10 11:32:11.940675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:44.895 [2024-12-10 11:32:11.940687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.264 ms 00:24:44.895 [2024-12-10 11:32:11.940697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.895 [2024-12-10 11:32:11.946263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.895 [2024-12-10 11:32:11.946402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:44.895 [2024-12-10 11:32:11.946422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.544 ms 00:24:44.895 [2024-12-10 11:32:11.946432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.895 [2024-12-10 11:32:11.981645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.895 [2024-12-10 11:32:11.981682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:44.895 [2024-12-10 11:32:11.981696] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 35.209 ms 00:24:44.895 [2024-12-10 11:32:11.981721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.895 [2024-12-10 11:32:12.001454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.895 [2024-12-10 11:32:12.001496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:44.895 [2024-12-10 11:32:12.001508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.710 ms 00:24:44.895 [2024-12-10 11:32:12.001518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.895 [2024-12-10 11:32:12.001660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.895 [2024-12-10 11:32:12.001673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:44.895 [2024-12-10 11:32:12.001693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:44.895 [2024-12-10 11:32:12.001703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.155 [2024-12-10 11:32:12.036319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.155 [2024-12-10 11:32:12.036353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:45.155 [2024-12-10 11:32:12.036365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.655 ms 00:24:45.155 [2024-12-10 11:32:12.036374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.155 [2024-12-10 11:32:12.070371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.155 [2024-12-10 11:32:12.070406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:45.155 [2024-12-10 11:32:12.070418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.000 ms 00:24:45.155 [2024-12-10 11:32:12.070426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.155 [2024-12-10 11:32:12.104103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.155 [2024-12-10 11:32:12.104266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:45.155 [2024-12-10 11:32:12.104287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.662 ms 00:24:45.155 [2024-12-10 11:32:12.104297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.155 [2024-12-10 11:32:12.138036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.155 [2024-12-10 11:32:12.138214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:45.155 [2024-12-10 11:32:12.138233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.657 ms 00:24:45.155 [2024-12-10 11:32:12.138244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.155 [2024-12-10 11:32:12.138296] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:45.155 [2024-12-10 11:32:12.138313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:24:45.155 [2024-12-10 11:32:12.138358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:45.155 [2024-12-10 11:32:12.138683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.138999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139153] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:45.156 [2024-12-10 11:32:12.139410] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:45.156 [2024-12-10 11:32:12.139419] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6a8cf975-d8b5-43ec-a656-efac3c3b89a7 00:24:45.156 [2024-12-10 11:32:12.139430] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:45.156 [2024-12-10 11:32:12.139440] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:24:45.156 [2024-12-10 11:32:12.139449] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:45.156 [2024-12-10 11:32:12.139458] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:45.156 [2024-12-10 11:32:12.139467] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:45.156 [2024-12-10 11:32:12.139477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:45.156 [2024-12-10 11:32:12.139491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:45.156 [2024-12-10 11:32:12.139500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:45.156 [2024-12-10 11:32:12.139509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:45.156 [2024-12-10 11:32:12.139518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.156 [2024-12-10 11:32:12.139530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:45.156 [2024-12-10 11:32:12.139541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.224 ms 00:24:45.156 [2024-12-10 11:32:12.139551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.156 [2024-12-10 11:32:12.158727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.156 [2024-12-10 11:32:12.158761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:45.156 [2024-12-10 11:32:12.158773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.186 ms 00:24:45.156 [2024-12-10 11:32:12.158783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.156 [2024-12-10 11:32:12.159301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.156 [2024-12-10 11:32:12.159317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:45.156 [2024-12-10 11:32:12.159328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:24:45.156 [2024-12-10 11:32:12.159338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.156 [2024-12-10 11:32:12.213926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.156 [2024-12-10 11:32:12.213962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:45.156 [2024-12-10 11:32:12.213974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.156 [2024-12-10 11:32:12.214004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.156 [2024-12-10 11:32:12.214087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.156 [2024-12-10 11:32:12.214099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:45.156 [2024-12-10 11:32:12.214109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.156 [2024-12-10 11:32:12.214119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.156 [2024-12-10 11:32:12.214165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.156 [2024-12-10 11:32:12.214177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:45.156 [2024-12-10 11:32:12.214188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.156 [2024-12-10 11:32:12.214198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.157 [2024-12-10 11:32:12.214220] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.157 [2024-12-10 11:32:12.214230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:45.157 [2024-12-10 11:32:12.214240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.157 [2024-12-10 11:32:12.214250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.416 [2024-12-10 11:32:12.331048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.416 [2024-12-10 11:32:12.331099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:45.416 [2024-12-10 11:32:12.331114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.416 [2024-12-10 11:32:12.331129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.416 [2024-12-10 11:32:12.425988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.416 [2024-12-10 11:32:12.426036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:45.416 [2024-12-10 11:32:12.426049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.416 [2024-12-10 11:32:12.426059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.416 [2024-12-10 11:32:12.426123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.416 [2024-12-10 11:32:12.426134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:45.416 [2024-12-10 11:32:12.426143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.416 [2024-12-10 11:32:12.426153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.416 [2024-12-10 11:32:12.426180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.416 [2024-12-10 11:32:12.426202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:45.416 [2024-12-10 11:32:12.426211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.416 [2024-12-10 11:32:12.426221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.416 [2024-12-10 11:32:12.426323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.416 [2024-12-10 11:32:12.426336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:45.416 [2024-12-10 11:32:12.426346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.416 [2024-12-10 11:32:12.426356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.416 [2024-12-10 11:32:12.426391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.416 [2024-12-10 11:32:12.426403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:45.416 [2024-12-10 11:32:12.426420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.416 [2024-12-10 11:32:12.426430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.416 [2024-12-10 11:32:12.426467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.416 [2024-12-10 11:32:12.426477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:45.416 [2024-12-10 11:32:12.426487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.416 [2024-12-10 11:32:12.426496] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:45.416 [2024-12-10 11:32:12.426537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.416 [2024-12-10 11:32:12.426555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:45.416 [2024-12-10 11:32:12.426565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.416 [2024-12-10 11:32:12.426574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.416 [2024-12-10 11:32:12.426717] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 496.305 ms, result 0 00:24:46.354 00:24:46.354 00:24:46.354 11:32:13 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78901 00:24:46.354 11:32:13 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:24:46.354 11:32:13 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78901 00:24:46.354 11:32:13 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78901 ']' 00:24:46.354 11:32:13 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.354 11:32:13 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:46.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.354 11:32:13 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.354 11:32:13 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:46.354 11:32:13 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:46.613 [2024-12-10 11:32:13.558162] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:24:46.613 [2024-12-10 11:32:13.558491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78901 ] 00:24:46.872 [2024-12-10 11:32:13.740795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.872 [2024-12-10 11:32:13.845455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.818 11:32:14 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.818 11:32:14 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:24:47.818 11:32:14 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:24:47.818 [2024-12-10 11:32:14.886185] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:47.818 [2024-12-10 11:32:14.886441] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:48.132 [2024-12-10 11:32:15.067794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.067842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:48.133 [2024-12-10 11:32:15.067877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:48.133 [2024-12-10 11:32:15.067888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.133 [2024-12-10 11:32:15.071612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.071649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:48.133 [2024-12-10 11:32:15.071663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.709 ms 00:24:48.133 [2024-12-10 11:32:15.071690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.133 [2024-12-10 11:32:15.071796] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:48.133 [2024-12-10 11:32:15.072752] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:48.133 [2024-12-10 11:32:15.072787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.072798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:48.133 [2024-12-10 11:32:15.072811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:24:48.133 [2024-12-10 11:32:15.072821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.133 [2024-12-10 11:32:15.074472] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:48.133 [2024-12-10 11:32:15.093078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.093122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:48.133 [2024-12-10 11:32:15.093153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.639 ms 00:24:48.133 [2024-12-10 11:32:15.093168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.133 [2024-12-10 11:32:15.093269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.093288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:48.133 [2024-12-10 11:32:15.093299] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:48.133 [2024-12-10 11:32:15.093314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.133 [2024-12-10 11:32:15.100149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.100188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:48.133 [2024-12-10 11:32:15.100200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.792 ms 00:24:48.133 [2024-12-10 11:32:15.100214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.133 [2024-12-10 11:32:15.100343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.100361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:48.133 [2024-12-10 11:32:15.100371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:24:48.133 [2024-12-10 11:32:15.100392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.133 [2024-12-10 11:32:15.100416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.100431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:48.133 [2024-12-10 11:32:15.100441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:48.133 [2024-12-10 11:32:15.100455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.133 [2024-12-10 11:32:15.100478] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:48.133 [2024-12-10 11:32:15.105143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.105175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:48.133 [2024-12-10 11:32:15.105190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.673 ms 00:24:48.133 [2024-12-10 11:32:15.105216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.133 [2024-12-10 11:32:15.105297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.105310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:48.133 [2024-12-10 11:32:15.105325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:48.133 [2024-12-10 11:32:15.105340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.133 [2024-12-10 11:32:15.105367] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:48.133 [2024-12-10 11:32:15.105394] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:48.133 [2024-12-10 11:32:15.105453] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:48.133 [2024-12-10 11:32:15.105473] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:48.133 [2024-12-10 11:32:15.105563] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:48.133 [2024-12-10 11:32:15.105577] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:48.133 [2024-12-10 11:32:15.105600] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:48.133 [2024-12-10 11:32:15.105614] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:48.133 [2024-12-10 11:32:15.105630] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:48.133 [2024-12-10 11:32:15.105641] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:48.133 [2024-12-10 11:32:15.105655] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:48.133 [2024-12-10 11:32:15.105665] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:48.133 [2024-12-10 11:32:15.105684] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:48.133 [2024-12-10 11:32:15.105695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.105709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:48.133 [2024-12-10 11:32:15.105719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:24:48.133 [2024-12-10 11:32:15.105733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.133 [2024-12-10 11:32:15.105811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.105827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:48.133 [2024-12-10 11:32:15.105837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:48.133 [2024-12-10 11:32:15.105851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.133 [2024-12-10 11:32:15.105957] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:48.133 [2024-12-10 11:32:15.105976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:48.133 [2024-12-10 11:32:15.105986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:48.133 [2024-12-10 11:32:15.106001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.133 [2024-12-10 11:32:15.106011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:48.133 [2024-12-10 11:32:15.106027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:48.133 [2024-12-10 11:32:15.106036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:48.133 [2024-12-10 11:32:15.106055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:48.133 [2024-12-10 11:32:15.106065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:48.133 [2024-12-10 11:32:15.106079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:48.133 [2024-12-10 11:32:15.106088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:48.133 [2024-12-10 11:32:15.106102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:48.133 [2024-12-10 11:32:15.106111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:48.133 [2024-12-10 11:32:15.106126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:48.133 [2024-12-10 11:32:15.106136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:48.133 [2024-12-10 11:32:15.106150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.133 
[2024-12-10 11:32:15.106159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:48.133 [2024-12-10 11:32:15.106173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:48.133 [2024-12-10 11:32:15.106193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.133 [2024-12-10 11:32:15.106207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:48.133 [2024-12-10 11:32:15.106217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:48.133 [2024-12-10 11:32:15.106230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:48.133 [2024-12-10 11:32:15.106239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:48.133 [2024-12-10 11:32:15.106258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:48.133 [2024-12-10 11:32:15.106267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:48.133 [2024-12-10 11:32:15.106281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:48.133 [2024-12-10 11:32:15.106290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:48.133 [2024-12-10 11:32:15.106303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:48.133 [2024-12-10 11:32:15.106312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:48.133 [2024-12-10 11:32:15.106328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:48.133 [2024-12-10 11:32:15.106337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:48.133 [2024-12-10 11:32:15.106351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:48.133 [2024-12-10 11:32:15.106360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:48.133 [2024-12-10 11:32:15.106374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:48.133 [2024-12-10 11:32:15.106384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:48.133 [2024-12-10 11:32:15.106398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:48.133 [2024-12-10 11:32:15.106407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:48.133 [2024-12-10 11:32:15.106420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:48.133 [2024-12-10 11:32:15.106429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:48.133 [2024-12-10 11:32:15.106447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.133 [2024-12-10 11:32:15.106456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:48.133 [2024-12-10 11:32:15.106470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:48.133 [2024-12-10 11:32:15.106479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.133 [2024-12-10 11:32:15.106493] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:48.133 [2024-12-10 11:32:15.106507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:48.133 [2024-12-10 11:32:15.106523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:48.133 [2024-12-10 11:32:15.106533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.133 [2024-12-10 11:32:15.106548] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:24:48.133 [2024-12-10 11:32:15.106557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:48.133 [2024-12-10 11:32:15.106570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:48.133 [2024-12-10 11:32:15.106580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:48.133 [2024-12-10 11:32:15.106594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:48.133 [2024-12-10 11:32:15.106603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:48.133 [2024-12-10 11:32:15.106617] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:48.133 [2024-12-10 11:32:15.106629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:48.133 [2024-12-10 11:32:15.106651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:48.133 [2024-12-10 11:32:15.106662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:48.133 [2024-12-10 11:32:15.106677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:48.133 [2024-12-10 11:32:15.106687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:48.133 [2024-12-10 11:32:15.106702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:48.133 [2024-12-10 11:32:15.106712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:48.133 [2024-12-10 11:32:15.106727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:48.133 [2024-12-10 11:32:15.106737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:48.133 [2024-12-10 11:32:15.106752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:48.133 [2024-12-10 11:32:15.106762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:48.133 [2024-12-10 11:32:15.106777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:48.133 [2024-12-10 11:32:15.106787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:48.133 [2024-12-10 11:32:15.106802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:48.133 [2024-12-10 11:32:15.106812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:48.133 [2024-12-10 11:32:15.106826] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:48.133 [2024-12-10 
11:32:15.106837] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:48.133 [2024-12-10 11:32:15.106858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:48.133 [2024-12-10 11:32:15.106868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:48.133 [2024-12-10 11:32:15.106882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:48.133 [2024-12-10 11:32:15.106893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:48.133 [2024-12-10 11:32:15.106908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.106929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:48.133 [2024-12-10 11:32:15.106944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:24:48.133 [2024-12-10 11:32:15.106959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.133 [2024-12-10 11:32:15.145169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.145350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:48.133 [2024-12-10 11:32:15.145583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.206 ms 00:24:48.133 [2024-12-10 11:32:15.145627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.133 [2024-12-10 11:32:15.145767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.145925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:48.133 [2024-12-10 11:32:15.146029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:48.133 [2024-12-10 11:32:15.146061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.133 [2024-12-10 11:32:15.192052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.192237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:48.133 [2024-12-10 11:32:15.192341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.012 ms 00:24:48.133 [2024-12-10 11:32:15.192381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.133 [2024-12-10 11:32:15.192494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.192584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:48.133 [2024-12-10 11:32:15.192629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:48.133 [2024-12-10 11:32:15.192662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.133 [2024-12-10 11:32:15.193190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.133 [2024-12-10 11:32:15.193305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:48.133 [2024-12-10 11:32:15.193383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:24:48.134 [2024-12-10 11:32:15.193427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:48.134 [2024-12-10 11:32:15.193580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.134 [2024-12-10 11:32:15.193622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:48.134 [2024-12-10 11:32:15.193701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:24:48.134 [2024-12-10 11:32:15.193733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.134 [2024-12-10 11:32:15.214318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.134 [2024-12-10 11:32:15.214482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:48.134 [2024-12-10 11:32:15.214579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.571 ms 00:24:48.134 [2024-12-10 11:32:15.214617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.411 [2024-12-10 11:32:15.262399] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:48.411 [2024-12-10 11:32:15.262612] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:48.411 [2024-12-10 11:32:15.262789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.411 [2024-12-10 11:32:15.262839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:48.411 [2024-12-10 11:32:15.262889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.101 ms 00:24:48.411 [2024-12-10 11:32:15.263187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.411 [2024-12-10 11:32:15.293286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.411 [2024-12-10 11:32:15.293433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:48.411 [2024-12-10 11:32:15.293524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.983 ms 00:24:48.411 [2024-12-10 11:32:15.293565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.411 [2024-12-10 11:32:15.311780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.411 [2024-12-10 11:32:15.311940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:48.411 [2024-12-10 11:32:15.312059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.111 ms 00:24:48.411 [2024-12-10 11:32:15.312099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.411 [2024-12-10 11:32:15.329905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.411 [2024-12-10 11:32:15.330050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:48.411 [2024-12-10 11:32:15.330079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.732 ms 00:24:48.411 [2024-12-10 11:32:15.330090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.411 [2024-12-10 11:32:15.330954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.411 [2024-12-10 11:32:15.330986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:48.411 [2024-12-10 11:32:15.331004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:24:48.411 [2024-12-10 11:32:15.331014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.411 [2024-12-10 
11:32:15.416990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.411 [2024-12-10 11:32:15.417047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:48.411 [2024-12-10 11:32:15.417086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.077 ms 00:24:48.411 [2024-12-10 11:32:15.417098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.411 [2024-12-10 11:32:15.427785] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:48.411 [2024-12-10 11:32:15.443886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.411 [2024-12-10 11:32:15.443956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:48.411 [2024-12-10 11:32:15.443994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.705 ms 00:24:48.411 [2024-12-10 11:32:15.444011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.411 [2024-12-10 11:32:15.444115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.411 [2024-12-10 11:32:15.444134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:48.411 [2024-12-10 11:32:15.444145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:48.411 [2024-12-10 11:32:15.444160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.411 [2024-12-10 11:32:15.444215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.411 [2024-12-10 11:32:15.444233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:48.412 [2024-12-10 11:32:15.444244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:48.412 [2024-12-10 11:32:15.444264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.412 [2024-12-10 11:32:15.444290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.412 [2024-12-10 11:32:15.444306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:48.412 [2024-12-10 11:32:15.444317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:48.412 [2024-12-10 11:32:15.444332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.412 [2024-12-10 11:32:15.444376] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:48.412 [2024-12-10 11:32:15.444399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.412 [2024-12-10 11:32:15.444415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:48.412 [2024-12-10 11:32:15.444430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:48.412 [2024-12-10 11:32:15.444440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.412 [2024-12-10 11:32:15.479912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.412 [2024-12-10 11:32:15.479969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:48.412 [2024-12-10 11:32:15.479989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.488 ms 00:24:48.412 [2024-12-10 11:32:15.480016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.412 [2024-12-10 11:32:15.480139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.412 [2024-12-10 11:32:15.480153] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:48.412 [2024-12-10 11:32:15.480168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:48.412 [2024-12-10 11:32:15.480183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.412 [2024-12-10 11:32:15.481277] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:48.412 [2024-12-10 11:32:15.485500] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 413.802 ms, result 0 00:24:48.412 [2024-12-10 11:32:15.486712] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:48.412 Some configs were skipped because the RPC state that can call them passed over. 00:24:48.671 11:32:15 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:24:48.671 [2024-12-10 11:32:15.713812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.671 [2024-12-10 11:32:15.713878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:48.671 [2024-12-10 11:32:15.713895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.619 ms 00:24:48.671 [2024-12-10 11:32:15.713911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.671 [2024-12-10 11:32:15.713963] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.769 ms, result 0 00:24:48.671 true 00:24:48.671 11:32:15 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:24:48.930 [2024-12-10 11:32:15.889394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.930 [2024-12-10 11:32:15.889450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:48.930 [2024-12-10 11:32:15.889470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:24:48.930 [2024-12-10 11:32:15.889481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.930 [2024-12-10 11:32:15.889528] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.417 ms, result 0 00:24:48.930 true 00:24:48.930 11:32:15 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78901 00:24:48.930 11:32:15 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78901 ']' 00:24:48.930 11:32:15 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78901 00:24:48.930 11:32:15 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:24:48.930 11:32:15 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:48.931 11:32:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78901 00:24:48.931 11:32:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:48.931 killing process with pid 78901 00:24:48.931 11:32:15 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:48.931 11:32:15 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78901' 00:24:48.931 11:32:15 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78901 00:24:48.931 11:32:15 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78901 00:24:50.311 [2024-12-10 11:32:17.002377] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.311 [2024-12-10 11:32:17.002461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:50.311 [2024-12-10 11:32:17.002478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:50.311 [2024-12-10 11:32:17.002490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.311 [2024-12-10 11:32:17.002517] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:50.311 [2024-12-10 11:32:17.006680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.311 [2024-12-10 11:32:17.006710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:50.311 [2024-12-10 11:32:17.006742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.149 ms 00:24:50.311 [2024-12-10 11:32:17.006752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.311 [2024-12-10 11:32:17.007020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.311 [2024-12-10 11:32:17.007034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:50.311 [2024-12-10 11:32:17.007047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:24:50.312 [2024-12-10 11:32:17.007056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.312 [2024-12-10 11:32:17.010467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.312 [2024-12-10 11:32:17.010501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:50.312 [2024-12-10 11:32:17.010518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.394 ms 00:24:50.312 [2024-12-10 11:32:17.010528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.312 [2024-12-10 11:32:17.015983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.312 [2024-12-10 11:32:17.016015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:50.312 [2024-12-10 11:32:17.016045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.424 ms 00:24:50.312 [2024-12-10 11:32:17.016054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.312 [2024-12-10 11:32:17.030222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.312 [2024-12-10 11:32:17.030263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:50.312 [2024-12-10 11:32:17.030296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.131 ms 00:24:50.312 [2024-12-10 11:32:17.030306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.312 [2024-12-10 11:32:17.040338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.312 [2024-12-10 11:32:17.040374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:50.312 [2024-12-10 11:32:17.040404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.978 ms 00:24:50.312 [2024-12-10 11:32:17.040414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.312 [2024-12-10 11:32:17.040558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.312 [2024-12-10 11:32:17.040571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:50.312 [2024-12-10 11:32:17.040584] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:24:50.312 [2024-12-10 11:32:17.040593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.312 [2024-12-10 11:32:17.054940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.312 [2024-12-10 11:32:17.054975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:50.312 [2024-12-10 11:32:17.055005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.348 ms 00:24:50.312 [2024-12-10 11:32:17.055014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.312 [2024-12-10 11:32:17.069603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.312 [2024-12-10 11:32:17.069634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:50.312 [2024-12-10 11:32:17.069667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.559 ms 00:24:50.312 [2024-12-10 11:32:17.069676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.312 [2024-12-10 11:32:17.083683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.312 [2024-12-10 11:32:17.083713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:50.312 [2024-12-10 11:32:17.083727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.975 ms 00:24:50.312 [2024-12-10 11:32:17.083736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.312 [2024-12-10 11:32:17.097668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.312 [2024-12-10 11:32:17.097702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:50.312 [2024-12-10 11:32:17.097716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.865 ms 00:24:50.312 [2024-12-10 11:32:17.097725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.312 [2024-12-10 11:32:17.097805] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:50.312 [2024-12-10 11:32:17.097823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.097838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.097849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.097862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.097873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.097888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.097899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.097911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.097932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.097945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 
11:32:17.097956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.097968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.097979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.097991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:24:50.312 [2024-12-10 11:32:17.098250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:50.312 [2024-12-10 11:32:17.098532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.098998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.099009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.099025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.099036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.099052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:50.313 [2024-12-10 11:32:17.099080] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:50.313 [2024-12-10 11:32:17.099106] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6a8cf975-d8b5-43ec-a656-efac3c3b89a7 00:24:50.313 [2024-12-10 11:32:17.099123] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:50.313 [2024-12-10 11:32:17.099138] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:50.313 [2024-12-10 11:32:17.099148] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:50.313 [2024-12-10 11:32:17.099162] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:50.313 [2024-12-10 11:32:17.099172] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:50.313 [2024-12-10 11:32:17.099187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:50.313 [2024-12-10 11:32:17.099197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:50.313 [2024-12-10 11:32:17.099210] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:50.313 [2024-12-10 11:32:17.099219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:50.313 [2024-12-10 11:32:17.099234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:50.313 [2024-12-10 11:32:17.099245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:50.313 [2024-12-10 11:32:17.099260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.431 ms 00:24:50.313 [2024-12-10 11:32:17.099270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.313 [2024-12-10 11:32:17.118337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.313 [2024-12-10 11:32:17.118368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:50.313 [2024-12-10 11:32:17.118388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.061 ms 00:24:50.313 [2024-12-10 11:32:17.118398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.313 [2024-12-10 11:32:17.118981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.313 [2024-12-10 11:32:17.118995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:50.313 [2024-12-10 11:32:17.119016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:24:50.313 [2024-12-10 11:32:17.119027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.313 [2024-12-10 11:32:17.187022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.313 [2024-12-10 11:32:17.187056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:50.313 [2024-12-10 11:32:17.187089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.313 [2024-12-10 11:32:17.187100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.313 [2024-12-10 11:32:17.187182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.313 [2024-12-10 11:32:17.187194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:50.313 [2024-12-10 11:32:17.187214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.313 [2024-12-10 11:32:17.187224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.313 [2024-12-10 11:32:17.187278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.313 [2024-12-10 11:32:17.187290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:50.313 [2024-12-10 11:32:17.187309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.313 [2024-12-10 11:32:17.187319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.313 [2024-12-10 11:32:17.187342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.313 [2024-12-10 11:32:17.187352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:50.313 [2024-12-10 11:32:17.187366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.313 [2024-12-10 11:32:17.187381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.313 [2024-12-10 11:32:17.305168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.313 [2024-12-10 11:32:17.305221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:50.313 [2024-12-10 11:32:17.305240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.313 [2024-12-10 11:32:17.305266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.313 [2024-12-10 
11:32:17.403033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.313 [2024-12-10 11:32:17.403076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:50.313 [2024-12-10 11:32:17.403094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.313 [2024-12-10 11:32:17.403125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.313 [2024-12-10 11:32:17.403206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.313 [2024-12-10 11:32:17.403219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:50.313 [2024-12-10 11:32:17.403239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.313 [2024-12-10 11:32:17.403249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.313 [2024-12-10 11:32:17.403283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.313 [2024-12-10 11:32:17.403293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:50.313 [2024-12-10 11:32:17.403308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.313 [2024-12-10 11:32:17.403318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.313 [2024-12-10 11:32:17.403458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.313 [2024-12-10 11:32:17.403472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:50.313 [2024-12-10 11:32:17.403487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.313 [2024-12-10 11:32:17.403498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.313 [2024-12-10 11:32:17.403542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.313 [2024-12-10 11:32:17.403554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:50.313 [2024-12-10 11:32:17.403570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.313 [2024-12-10 11:32:17.403580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.313 [2024-12-10 11:32:17.403632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.313 [2024-12-10 11:32:17.403643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:50.314 [2024-12-10 11:32:17.403663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.314 [2024-12-10 11:32:17.403674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.314 [2024-12-10 11:32:17.403724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.314 [2024-12-10 11:32:17.403736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:50.314 [2024-12-10 11:32:17.403751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.314 [2024-12-10 11:32:17.403761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.314 [2024-12-10 11:32:17.403912] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 402.147 ms, result 0 00:24:51.692 11:32:18 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:51.692 [2024-12-10 11:32:18.456229] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:24:51.692 [2024-12-10 11:32:18.456363] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78965 ] 00:24:51.692 [2024-12-10 11:32:18.636232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.692 [2024-12-10 11:32:18.745687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.262 [2024-12-10 11:32:19.098471] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:52.262 [2024-12-10 11:32:19.098536] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:52.262 [2024-12-10 11:32:19.259635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.262 [2024-12-10 11:32:19.259681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:52.262 [2024-12-10 11:32:19.259696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:52.262 [2024-12-10 11:32:19.259707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.262 [2024-12-10 11:32:19.262816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.262 [2024-12-10 11:32:19.262856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:52.262 [2024-12-10 11:32:19.262884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.078 ms 00:24:52.262 [2024-12-10 11:32:19.262894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.262 [2024-12-10 11:32:19.263016] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:52.262 [2024-12-10 11:32:19.263932] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:52.262 [2024-12-10 11:32:19.263961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.262 [2024-12-10 11:32:19.263972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:52.262 [2024-12-10 11:32:19.263984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:24:52.262 [2024-12-10 11:32:19.263993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.262 [2024-12-10 11:32:19.265485] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:52.262 [2024-12-10 11:32:19.284047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.262 [2024-12-10 11:32:19.284086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:52.262 [2024-12-10 11:32:19.284099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.593 ms 00:24:52.262 [2024-12-10 11:32:19.284108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.262 [2024-12-10 11:32:19.284223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.262 [2024-12-10 11:32:19.284237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:52.262 [2024-12-10 11:32:19.284249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:52.262 [2024-12-10 
11:32:19.284258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.262 [2024-12-10 11:32:19.291018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.262 [2024-12-10 11:32:19.291045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:52.262 [2024-12-10 11:32:19.291056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.730 ms 00:24:52.262 [2024-12-10 11:32:19.291065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.262 [2024-12-10 11:32:19.291177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.262 [2024-12-10 11:32:19.291193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:52.262 [2024-12-10 11:32:19.291204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:52.262 [2024-12-10 11:32:19.291214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.262 [2024-12-10 11:32:19.291245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.262 [2024-12-10 11:32:19.291257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:52.262 [2024-12-10 11:32:19.291267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:52.262 [2024-12-10 11:32:19.291276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.262 [2024-12-10 11:32:19.291298] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:52.262 [2024-12-10 11:32:19.296027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.262 [2024-12-10 11:32:19.296056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:52.262 [2024-12-10 11:32:19.296068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.741 ms 00:24:52.262 [2024-12-10 11:32:19.296077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.262 [2024-12-10 11:32:19.296163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.262 [2024-12-10 11:32:19.296176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:52.262 [2024-12-10 11:32:19.296198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:52.262 [2024-12-10 11:32:19.296208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.262 [2024-12-10 11:32:19.296235] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:52.262 [2024-12-10 11:32:19.296258] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:52.262 [2024-12-10 11:32:19.296292] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:52.262 [2024-12-10 11:32:19.296310] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:52.262 [2024-12-10 11:32:19.296399] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:52.263 [2024-12-10 11:32:19.296412] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:52.263 [2024-12-10 11:32:19.296424] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:24:52.263 [2024-12-10 11:32:19.296441] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:52.263 [2024-12-10 11:32:19.296453] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:52.263 [2024-12-10 11:32:19.296464] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:52.263 [2024-12-10 11:32:19.296475] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:52.263 [2024-12-10 11:32:19.296485] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:52.263 [2024-12-10 11:32:19.296501] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:52.263 [2024-12-10 11:32:19.296511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.263 [2024-12-10 11:32:19.296521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:52.263 [2024-12-10 11:32:19.296531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:24:52.263 [2024-12-10 11:32:19.296541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.263 [2024-12-10 11:32:19.296616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.263 [2024-12-10 11:32:19.296631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:52.263 [2024-12-10 11:32:19.296641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:52.263 [2024-12-10 11:32:19.296651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.263 [2024-12-10 11:32:19.296737] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:52.263 [2024-12-10 11:32:19.296749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:52.263 [2024-12-10 11:32:19.296759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:52.263 [2024-12-10 11:32:19.296769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.263 [2024-12-10 11:32:19.296780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:52.263 [2024-12-10 11:32:19.296789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:52.263 [2024-12-10 11:32:19.296799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:52.263 [2024-12-10 11:32:19.296810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:52.263 [2024-12-10 11:32:19.296819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:52.263 [2024-12-10 11:32:19.296828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:52.263 [2024-12-10 11:32:19.296838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:52.263 [2024-12-10 11:32:19.296856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:52.263 [2024-12-10 11:32:19.296866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:52.263 [2024-12-10 11:32:19.296875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:52.263 [2024-12-10 11:32:19.296885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:52.263 [2024-12-10 11:32:19.296894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.263 [2024-12-10 11:32:19.296903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:24:52.263 [2024-12-10 11:32:19.296912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:52.263 [2024-12-10 11:32:19.296922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.263 [2024-12-10 11:32:19.296948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:52.263 [2024-12-10 11:32:19.296958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:52.263 [2024-12-10 11:32:19.296967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.263 [2024-12-10 11:32:19.296977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:52.263 [2024-12-10 11:32:19.296986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:52.263 [2024-12-10 11:32:19.296995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.263 [2024-12-10 11:32:19.297004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:52.263 [2024-12-10 11:32:19.297013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:52.263 [2024-12-10 11:32:19.297022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.263 [2024-12-10 11:32:19.297032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:52.263 [2024-12-10 11:32:19.297041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:52.263 [2024-12-10 11:32:19.297050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.263 [2024-12-10 11:32:19.297059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:52.263 [2024-12-10 11:32:19.297068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:52.263 [2024-12-10 11:32:19.297076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:52.263 [2024-12-10 11:32:19.297085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:52.263 [2024-12-10 11:32:19.297094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:52.263 [2024-12-10 11:32:19.297103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:52.263 [2024-12-10 11:32:19.297112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:52.263 [2024-12-10 11:32:19.297121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:52.263 [2024-12-10 11:32:19.297130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.263 [2024-12-10 11:32:19.297139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:52.263 [2024-12-10 11:32:19.297149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:52.263 [2024-12-10 11:32:19.297158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.263 [2024-12-10 11:32:19.297169] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:52.263 [2024-12-10 11:32:19.297179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:52.263 [2024-12-10 11:32:19.297192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:52.263 [2024-12-10 11:32:19.297202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.263 [2024-12-10 11:32:19.297212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:52.263 [2024-12-10 11:32:19.297222] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:52.263 [2024-12-10 11:32:19.297231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:52.263 [2024-12-10 11:32:19.297240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:52.263 [2024-12-10 11:32:19.297249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:52.263 [2024-12-10 11:32:19.297264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:52.263 [2024-12-10 11:32:19.297274] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:52.263 [2024-12-10 11:32:19.297287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:52.263 [2024-12-10 11:32:19.297299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:52.263 [2024-12-10 11:32:19.297309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:52.263 [2024-12-10 11:32:19.297320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:52.263 [2024-12-10 11:32:19.297330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:52.263 [2024-12-10 11:32:19.297340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:52.263 [2024-12-10 11:32:19.297350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:52.263 [2024-12-10 11:32:19.297360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:52.263 [2024-12-10 11:32:19.297371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:52.263 [2024-12-10 11:32:19.297381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:52.263 [2024-12-10 11:32:19.297391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:52.263 [2024-12-10 11:32:19.297401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:52.263 [2024-12-10 11:32:19.297411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:52.263 [2024-12-10 11:32:19.297429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:52.263 [2024-12-10 11:32:19.297440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:52.263 [2024-12-10 11:32:19.297450] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:52.263 [2024-12-10 11:32:19.297462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:52.263 [2024-12-10 11:32:19.297474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:52.263 [2024-12-10 11:32:19.297485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:52.263 [2024-12-10 11:32:19.297495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:52.263 [2024-12-10 11:32:19.297506] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:52.263 [2024-12-10 11:32:19.297517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.263 [2024-12-10 11:32:19.297531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:52.263 [2024-12-10 11:32:19.297541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:24:52.263 [2024-12-10 11:32:19.297551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.263 [2024-12-10 11:32:19.337592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.263 [2024-12-10 11:32:19.337629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:52.263 [2024-12-10 11:32:19.337659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.045 ms 00:24:52.263 [2024-12-10 11:32:19.337670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.263 [2024-12-10 11:32:19.337794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.263 [2024-12-10 11:32:19.337807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:52.263 [2024-12-10 11:32:19.337818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:52.264 [2024-12-10 11:32:19.337829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.523 [2024-12-10 11:32:19.395025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.523 [2024-12-10 11:32:19.395066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:52.523 [2024-12-10 11:32:19.395083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.264 ms 00:24:52.523 [2024-12-10 11:32:19.395094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.523 [2024-12-10 11:32:19.395203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.523 [2024-12-10 11:32:19.395217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:52.523 [2024-12-10 11:32:19.395228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:52.523 [2024-12-10 11:32:19.395238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.523 [2024-12-10 11:32:19.395680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.523 [2024-12-10 11:32:19.395693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:52.523 [2024-12-10 11:32:19.395708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:24:52.523 [2024-12-10 11:32:19.395718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.523 [2024-12-10 11:32:19.395835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:52.523 [2024-12-10 11:32:19.395849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:52.523 [2024-12-10 11:32:19.395859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:24:52.523 [2024-12-10 11:32:19.395869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.523 [2024-12-10 11:32:19.415548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.523 [2024-12-10 11:32:19.415587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:52.523 [2024-12-10 11:32:19.415601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.686 ms 00:24:52.523 [2024-12-10 11:32:19.415612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.523 [2024-12-10 11:32:19.434817] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:52.523 [2024-12-10 11:32:19.434861] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:52.523 [2024-12-10 11:32:19.434877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.523 [2024-12-10 11:32:19.434888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:52.523 [2024-12-10 11:32:19.434900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.173 ms 00:24:52.523 [2024-12-10 11:32:19.434910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.523 [2024-12-10 11:32:19.465267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.524 [2024-12-10 11:32:19.465312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:52.524 [2024-12-10 11:32:19.465327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.300 ms 00:24:52.524 [2024-12-10 11:32:19.465338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.524 [2024-12-10 11:32:19.484094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.524 [2024-12-10 11:32:19.484135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:52.524 [2024-12-10 11:32:19.484148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.691 ms 00:24:52.524 [2024-12-10 11:32:19.484159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.524 [2024-12-10 11:32:19.502312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.524 [2024-12-10 11:32:19.502348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:52.524 [2024-12-10 11:32:19.502360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.104 ms 00:24:52.524 [2024-12-10 11:32:19.502371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.524 [2024-12-10 11:32:19.503169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.524 [2024-12-10 11:32:19.503188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:52.524 [2024-12-10 11:32:19.503200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:24:52.524 [2024-12-10 11:32:19.503210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.524 [2024-12-10 11:32:19.587351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.524 [2024-12-10 
11:32:19.587415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:52.524 [2024-12-10 11:32:19.587432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.248 ms 00:24:52.524 [2024-12-10 11:32:19.587458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.524 [2024-12-10 11:32:19.597678] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:52.524 [2024-12-10 11:32:19.613411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.524 [2024-12-10 11:32:19.613463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:52.524 [2024-12-10 11:32:19.613477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.885 ms 00:24:52.524 [2024-12-10 11:32:19.613493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.524 [2024-12-10 11:32:19.613623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.524 [2024-12-10 11:32:19.613637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:52.524 [2024-12-10 11:32:19.613649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:52.524 [2024-12-10 11:32:19.613659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.524 [2024-12-10 11:32:19.613713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.524 [2024-12-10 11:32:19.613724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:52.524 [2024-12-10 11:32:19.613734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:52.524 [2024-12-10 11:32:19.613748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.524 [2024-12-10 11:32:19.613782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.524 [2024-12-10 11:32:19.613795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:52.524 [2024-12-10 11:32:19.613805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:52.524 [2024-12-10 11:32:19.613815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.524 [2024-12-10 11:32:19.613853] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:52.524 [2024-12-10 11:32:19.613881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.524 [2024-12-10 11:32:19.613891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:52.524 [2024-12-10 11:32:19.613901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:52.524 [2024-12-10 11:32:19.613911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.783 [2024-12-10 11:32:19.649285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.783 [2024-12-10 11:32:19.649320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:52.783 [2024-12-10 11:32:19.649334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.410 ms 00:24:52.783 [2024-12-10 11:32:19.649344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.783 [2024-12-10 11:32:19.649476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.783 [2024-12-10 11:32:19.649491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:52.783 [2024-12-10 
11:32:19.649501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:52.783 [2024-12-10 11:32:19.649511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.783 [2024-12-10 11:32:19.650428] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:52.783 [2024-12-10 11:32:19.654699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 391.134 ms, result 0 00:24:52.783 [2024-12-10 11:32:19.655475] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:52.783 [2024-12-10 11:32:19.673362] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:53.721  [2024-12-10T11:32:21.772Z] Copying: 27/256 [MB] (27 MBps) [2024-12-10T11:32:23.151Z] Copying: 51/256 [MB] (24 MBps) [2024-12-10T11:32:24.090Z] Copying: 75/256 [MB] (24 MBps) [2024-12-10T11:32:25.028Z] Copying: 100/256 [MB] (24 MBps) [2024-12-10T11:32:25.965Z] Copying: 124/256 [MB] (24 MBps) [2024-12-10T11:32:26.902Z] Copying: 149/256 [MB] (24 MBps) [2024-12-10T11:32:27.839Z] Copying: 174/256 [MB] (25 MBps) [2024-12-10T11:32:28.777Z] Copying: 199/256 [MB] (24 MBps) [2024-12-10T11:32:30.155Z] Copying: 224/256 [MB] (24 MBps) [2024-12-10T11:32:30.155Z] Copying: 249/256 [MB] (24 MBps) [2024-12-10T11:32:30.414Z] Copying: 256/256 [MB] (average 25 MBps)[2024-12-10 11:32:30.252175] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:03.300 [2024-12-10 11:32:30.276583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.300 [2024-12-10 11:32:30.276645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:03.300 [2024-12-10 11:32:30.276670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:03.300 [2024-12-10 11:32:30.276683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.300 [2024-12-10 11:32:30.276715] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:03.300 [2024-12-10 11:32:30.281532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.300 [2024-12-10 11:32:30.281571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:03.300 [2024-12-10 11:32:30.281602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.805 ms 00:25:03.300 [2024-12-10 11:32:30.281613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.301 [2024-12-10 11:32:30.281869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.301 [2024-12-10 11:32:30.281882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:03.301 [2024-12-10 11:32:30.281924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:25:03.301 [2024-12-10 11:32:30.281935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.301 [2024-12-10 11:32:30.284781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.301 [2024-12-10 11:32:30.284808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:03.301 [2024-12-10 11:32:30.284819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.802 ms 00:25:03.301 [2024-12-10 11:32:30.284829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:03.301 [2024-12-10 11:32:30.290207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.301 [2024-12-10 11:32:30.290249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:03.301 [2024-12-10 11:32:30.290261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.348 ms 00:25:03.301 [2024-12-10 11:32:30.290288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.301 [2024-12-10 11:32:30.325387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.301 [2024-12-10 11:32:30.325457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:03.301 [2024-12-10 11:32:30.325471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.079 ms 00:25:03.301 [2024-12-10 11:32:30.325482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.301 [2024-12-10 11:32:30.345699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.301 [2024-12-10 11:32:30.345736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:03.301 [2024-12-10 11:32:30.345771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.183 ms 00:25:03.301 [2024-12-10 11:32:30.345781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.301 [2024-12-10 11:32:30.345936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.301 [2024-12-10 11:32:30.345951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:03.301 [2024-12-10 11:32:30.345974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:25:03.301 [2024-12-10 11:32:30.345984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.301 [2024-12-10 11:32:30.381116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.301 [2024-12-10 11:32:30.381154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:03.301 [2024-12-10 11:32:30.381167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.169 ms 00:25:03.301 [2024-12-10 11:32:30.381176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.561 [2024-12-10 11:32:30.415202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.561 [2024-12-10 11:32:30.415238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:03.561 [2024-12-10 11:32:30.415251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.010 ms 00:25:03.561 [2024-12-10 11:32:30.415260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.561 [2024-12-10 11:32:30.448756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.561 [2024-12-10 11:32:30.448794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:03.561 [2024-12-10 11:32:30.448806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.481 ms 00:25:03.561 [2024-12-10 11:32:30.448815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.561 [2024-12-10 11:32:30.482210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.561 [2024-12-10 11:32:30.482249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:03.561 [2024-12-10 11:32:30.482261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.354 ms 00:25:03.561 
[2024-12-10 11:32:30.482270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.561 [2024-12-10 11:32:30.482340] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:03.561 [2024-12-10 11:32:30.482357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482595] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:03.561 [2024-12-10 11:32:30.482704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 
11:32:30.482873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.482998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:25:03.562 [2024-12-10 11:32:30.483151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:03.562 [2024-12-10 11:32:30.483452] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:03.562 [2024-12-10 11:32:30.483461] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6a8cf975-d8b5-43ec-a656-efac3c3b89a7 00:25:03.562 [2024-12-10 11:32:30.483472] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:03.562 [2024-12-10 11:32:30.483482] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:03.562 [2024-12-10 11:32:30.483491] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:03.562 [2024-12-10 11:32:30.483501] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:03.562 [2024-12-10 11:32:30.483511] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:03.562 [2024-12-10 11:32:30.483520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:03.562 [2024-12-10 11:32:30.483534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:03.562 [2024-12-10 11:32:30.483544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:03.562 [2024-12-10 11:32:30.483553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:03.562 [2024-12-10 11:32:30.483562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.562 [2024-12-10 11:32:30.483572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:03.562 [2024-12-10 11:32:30.483583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.225 ms 00:25:03.562 [2024-12-10 11:32:30.483593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.562 [2024-12-10 11:32:30.502708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.562 [2024-12-10 11:32:30.502741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:03.562 [2024-12-10 11:32:30.502768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.126 ms 00:25:03.562 [2024-12-10 11:32:30.502778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.562 [2024-12-10 11:32:30.503322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.562 [2024-12-10 11:32:30.503338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:03.562 [2024-12-10 11:32:30.503349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:25:03.562 [2024-12-10 11:32:30.503359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.562 [2024-12-10 11:32:30.554549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.562 [2024-12-10 11:32:30.554585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:03.562 [2024-12-10 11:32:30.554597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.562 [2024-12-10 11:32:30.554612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.562 [2024-12-10 11:32:30.554698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.562 [2024-12-10 11:32:30.554710] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:03.562 [2024-12-10 11:32:30.554720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.562 [2024-12-10 11:32:30.554730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.562 [2024-12-10 11:32:30.554776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.563 [2024-12-10 11:32:30.554788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:03.563 [2024-12-10 11:32:30.554799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.563 [2024-12-10 11:32:30.554809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.563 [2024-12-10 11:32:30.554831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.563 [2024-12-10 11:32:30.554841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:03.563 [2024-12-10 11:32:30.554851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.563 [2024-12-10 11:32:30.554860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.822 [2024-12-10 11:32:30.674072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.822 [2024-12-10 11:32:30.674118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:03.822 [2024-12-10 11:32:30.674131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.822 [2024-12-10 11:32:30.674142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.822 [2024-12-10 11:32:30.770972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.822 [2024-12-10 11:32:30.771016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:03.822 [2024-12-10 11:32:30.771030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.822 [2024-12-10 11:32:30.771041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.822 [2024-12-10 11:32:30.771112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.822 [2024-12-10 11:32:30.771123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:03.822 [2024-12-10 11:32:30.771134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.822 [2024-12-10 11:32:30.771145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.822 [2024-12-10 11:32:30.771175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.822 [2024-12-10 11:32:30.771192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:03.822 [2024-12-10 11:32:30.771202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.822 [2024-12-10 11:32:30.771212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.822 [2024-12-10 11:32:30.771321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.822 [2024-12-10 11:32:30.771335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:03.822 [2024-12-10 11:32:30.771345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.822 [2024-12-10 11:32:30.771355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.822 [2024-12-10 11:32:30.771391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:25:03.822 [2024-12-10 11:32:30.771404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:03.822 [2024-12-10 11:32:30.771418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.822 [2024-12-10 11:32:30.771428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.822 [2024-12-10 11:32:30.771467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.822 [2024-12-10 11:32:30.771478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:03.822 [2024-12-10 11:32:30.771488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.822 [2024-12-10 11:32:30.771498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.822 [2024-12-10 11:32:30.771542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.822 [2024-12-10 11:32:30.771557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:03.822 [2024-12-10 11:32:30.771568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.822 [2024-12-10 11:32:30.771577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.822 [2024-12-10 11:32:30.771718] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 495.965 ms, result 0 00:25:04.760 00:25:04.760 00:25:04.760 11:32:31 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:05.328 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:25:05.328 11:32:32 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:25:05.328 11:32:32 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:25:05.328 11:32:32 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:05.328 11:32:32 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:05.328 11:32:32 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:25:05.328 11:32:32 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:05.328 11:32:32 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78901 00:25:05.328 11:32:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78901 ']' 00:25:05.328 11:32:32 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78901 00:25:05.328 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78901) - No such process 00:25:05.328 Process with pid 78901 is not found 00:25:05.328 11:32:32 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78901 is not found' 00:25:05.328 00:25:05.328 real 1m10.967s 00:25:05.328 user 1m34.743s 00:25:05.328 sys 0m6.898s 00:25:05.328 11:32:32 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.328 11:32:32 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:05.328 ************************************ 00:25:05.328 END TEST ftl_trim 00:25:05.328 ************************************ 00:25:05.328 11:32:32 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:25:05.328 11:32:32 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:05.328 11:32:32 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.328 11:32:32 ftl -- common/autotest_common.sh@10 
-- # set +x 00:25:05.328 ************************************ 00:25:05.328 START TEST ftl_restore 00:25:05.328 ************************************ 00:25:05.328 11:32:32 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:25:05.588 * Looking for test storage... 00:25:05.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:05.588 11:32:32 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:05.588 11:32:32 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:25:05.588 11:32:32 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:05.588 11:32:32 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.588 11:32:32 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:05.589 11:32:32 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:25:05.589 11:32:32 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:05.589 11:32:32 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:05.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.589 --rc genhtml_branch_coverage=1 00:25:05.589 --rc genhtml_function_coverage=1 00:25:05.589 --rc genhtml_legend=1 00:25:05.589 --rc geninfo_all_blocks=1 00:25:05.589 --rc geninfo_unexecuted_blocks=1 00:25:05.589 00:25:05.589 ' 00:25:05.589 11:32:32 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:05.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.589 --rc genhtml_branch_coverage=1 00:25:05.589 --rc genhtml_function_coverage=1 00:25:05.589 --rc genhtml_legend=1 00:25:05.589 --rc geninfo_all_blocks=1 00:25:05.589 --rc geninfo_unexecuted_blocks=1 00:25:05.589 00:25:05.589 ' 00:25:05.589 11:32:32 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:05.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.589 --rc genhtml_branch_coverage=1 00:25:05.589 --rc genhtml_function_coverage=1 00:25:05.589 --rc genhtml_legend=1 00:25:05.589 --rc geninfo_all_blocks=1 00:25:05.589 --rc geninfo_unexecuted_blocks=1 00:25:05.589 00:25:05.589 ' 00:25:05.589 11:32:32 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:05.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.589 --rc genhtml_branch_coverage=1 00:25:05.589 --rc genhtml_function_coverage=1 00:25:05.589 --rc genhtml_legend=1 00:25:05.589 --rc geninfo_all_blocks=1 00:25:05.589 --rc geninfo_unexecuted_blocks=1 00:25:05.589 00:25:05.589 ' 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
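The xtrace above steps through the cmp_versions helper from scripts/common.sh: both version strings are split on '.', '-' and ':' into arrays (read -ra with IFS=.-:), then compared numerically field by field, which is why 'lt 1.15 2' succeeds and lcov 1.15 is treated as older than 2. A minimal self-contained sketch of the same pattern, assuming purely numeric fields; this is not SPDK's implementation verbatim:

# Sketch of the dotted-version comparison traced above. Returns 0 when
# $1 sorts strictly before $2; missing fields default to 0 (1.15 == 1.15.0).
version_lt() {
  local IFS=.-:
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < max; i++ )); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 is older than 2"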
00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.jI1uUSLtAX 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:05.589 
11:32:32 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79168 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:05.589 11:32:32 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79168 00:25:05.589 11:32:32 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79168 ']' 00:25:05.589 11:32:32 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.589 11:32:32 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.589 11:32:32 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.589 11:32:32 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.589 11:32:32 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:05.849 [2024-12-10 11:32:32.769811] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:25:05.849 [2024-12-10 11:32:32.770369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79168 ] 00:25:05.849 [2024-12-10 11:32:32.948135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.108 [2024-12-10 11:32:33.058798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.045 11:32:33 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.045 11:32:33 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:25:07.045 11:32:33 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:07.045 11:32:33 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:25:07.045 11:32:33 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:07.045 11:32:33 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:25:07.045 11:32:33 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:25:07.045 11:32:33 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:07.304 11:32:34 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:07.304 11:32:34 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:25:07.304 11:32:34 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:07.304 11:32:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:07.304 11:32:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:07.304 11:32:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:07.304 11:32:34 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:07.304 11:32:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:07.304 11:32:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:07.304 { 00:25:07.304 "name": "nvme0n1", 00:25:07.304 "aliases": [ 00:25:07.304 "5b4b03fa-053d-433c-aa48-15f26eea1225" 00:25:07.304 ], 00:25:07.304 "product_name": "NVMe disk", 00:25:07.304 "block_size": 4096, 00:25:07.304 "num_blocks": 1310720, 00:25:07.304 "uuid": 
"5b4b03fa-053d-433c-aa48-15f26eea1225", 00:25:07.304 "numa_id": -1, 00:25:07.304 "assigned_rate_limits": { 00:25:07.304 "rw_ios_per_sec": 0, 00:25:07.304 "rw_mbytes_per_sec": 0, 00:25:07.304 "r_mbytes_per_sec": 0, 00:25:07.304 "w_mbytes_per_sec": 0 00:25:07.304 }, 00:25:07.304 "claimed": true, 00:25:07.304 "claim_type": "read_many_write_one", 00:25:07.304 "zoned": false, 00:25:07.304 "supported_io_types": { 00:25:07.304 "read": true, 00:25:07.304 "write": true, 00:25:07.304 "unmap": true, 00:25:07.304 "flush": true, 00:25:07.304 "reset": true, 00:25:07.304 "nvme_admin": true, 00:25:07.304 "nvme_io": true, 00:25:07.304 "nvme_io_md": false, 00:25:07.304 "write_zeroes": true, 00:25:07.304 "zcopy": false, 00:25:07.304 "get_zone_info": false, 00:25:07.304 "zone_management": false, 00:25:07.304 "zone_append": false, 00:25:07.304 "compare": true, 00:25:07.304 "compare_and_write": false, 00:25:07.304 "abort": true, 00:25:07.304 "seek_hole": false, 00:25:07.304 "seek_data": false, 00:25:07.304 "copy": true, 00:25:07.304 "nvme_iov_md": false 00:25:07.304 }, 00:25:07.304 "driver_specific": { 00:25:07.304 "nvme": [ 00:25:07.304 { 00:25:07.304 "pci_address": "0000:00:11.0", 00:25:07.304 "trid": { 00:25:07.304 "trtype": "PCIe", 00:25:07.304 "traddr": "0000:00:11.0" 00:25:07.304 }, 00:25:07.304 "ctrlr_data": { 00:25:07.304 "cntlid": 0, 00:25:07.304 "vendor_id": "0x1b36", 00:25:07.304 "model_number": "QEMU NVMe Ctrl", 00:25:07.304 "serial_number": "12341", 00:25:07.304 "firmware_revision": "8.0.0", 00:25:07.304 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:07.304 "oacs": { 00:25:07.304 "security": 0, 00:25:07.304 "format": 1, 00:25:07.304 "firmware": 0, 00:25:07.304 "ns_manage": 1 00:25:07.304 }, 00:25:07.304 "multi_ctrlr": false, 00:25:07.304 "ana_reporting": false 00:25:07.304 }, 00:25:07.304 "vs": { 00:25:07.305 "nvme_version": "1.4" 00:25:07.305 }, 00:25:07.305 "ns_data": { 00:25:07.305 "id": 1, 00:25:07.305 "can_share": false 00:25:07.305 } 00:25:07.305 } 00:25:07.305 ], 00:25:07.305 "mp_policy": "active_passive" 00:25:07.305 } 00:25:07.305 } 00:25:07.305 ]' 00:25:07.305 11:32:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:07.563 11:32:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:07.563 11:32:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:07.563 11:32:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:07.563 11:32:34 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:07.563 11:32:34 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:25:07.563 11:32:34 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:25:07.563 11:32:34 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:07.563 11:32:34 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:25:07.563 11:32:34 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:07.563 11:32:34 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:07.822 11:32:34 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=7f923bc7-3320-43d7-a1bf-53e701987368 00:25:07.822 11:32:34 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:25:07.822 11:32:34 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7f923bc7-3320-43d7-a1bf-53e701987368 00:25:08.080 11:32:34 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:25:08.080 11:32:35 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=1cd90635-26e9-4370-9434-52315d0c6900 00:25:08.080 11:32:35 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1cd90635-26e9-4370-9434-52315d0c6900 00:25:08.339 11:32:35 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=6215b262-9879-4d5e-8bcc-c76e89653870 00:25:08.339 11:32:35 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:25:08.339 11:32:35 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6215b262-9879-4d5e-8bcc-c76e89653870 00:25:08.339 11:32:35 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:25:08.339 11:32:35 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:08.339 11:32:35 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=6215b262-9879-4d5e-8bcc-c76e89653870 00:25:08.339 11:32:35 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:25:08.339 11:32:35 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 6215b262-9879-4d5e-8bcc-c76e89653870 00:25:08.339 11:32:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6215b262-9879-4d5e-8bcc-c76e89653870 00:25:08.339 11:32:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:08.339 11:32:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:08.339 11:32:35 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:08.339 11:32:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6215b262-9879-4d5e-8bcc-c76e89653870 00:25:08.598 11:32:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:08.598 { 00:25:08.598 "name": "6215b262-9879-4d5e-8bcc-c76e89653870", 00:25:08.598 "aliases": [ 00:25:08.598 "lvs/nvme0n1p0" 00:25:08.598 ], 00:25:08.598 "product_name": "Logical Volume", 00:25:08.598 "block_size": 4096, 00:25:08.598 "num_blocks": 26476544, 00:25:08.598 "uuid": "6215b262-9879-4d5e-8bcc-c76e89653870", 00:25:08.598 "assigned_rate_limits": { 00:25:08.598 "rw_ios_per_sec": 0, 00:25:08.598 "rw_mbytes_per_sec": 0, 00:25:08.598 "r_mbytes_per_sec": 0, 00:25:08.598 "w_mbytes_per_sec": 0 00:25:08.598 }, 00:25:08.598 "claimed": false, 00:25:08.598 "zoned": false, 00:25:08.598 "supported_io_types": { 00:25:08.598 "read": true, 00:25:08.598 "write": true, 00:25:08.598 "unmap": true, 00:25:08.598 "flush": false, 00:25:08.598 "reset": true, 00:25:08.598 "nvme_admin": false, 00:25:08.598 "nvme_io": false, 00:25:08.598 "nvme_io_md": false, 00:25:08.598 "write_zeroes": true, 00:25:08.598 "zcopy": false, 00:25:08.598 "get_zone_info": false, 00:25:08.598 "zone_management": false, 00:25:08.598 "zone_append": false, 00:25:08.598 "compare": false, 00:25:08.598 "compare_and_write": false, 00:25:08.598 "abort": false, 00:25:08.598 "seek_hole": true, 00:25:08.598 "seek_data": true, 00:25:08.598 "copy": false, 00:25:08.598 "nvme_iov_md": false 00:25:08.598 }, 00:25:08.598 "driver_specific": { 00:25:08.598 "lvol": { 00:25:08.598 "lvol_store_uuid": "1cd90635-26e9-4370-9434-52315d0c6900", 00:25:08.598 "base_bdev": "nvme0n1", 00:25:08.598 "thin_provision": true, 00:25:08.598 "num_allocated_clusters": 0, 00:25:08.598 "snapshot": false, 00:25:08.598 "clone": false, 00:25:08.598 "esnap_clone": false 00:25:08.598 } 00:25:08.598 } 00:25:08.598 } 00:25:08.598 ]' 00:25:08.598 11:32:35 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:08.598 11:32:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:08.598 11:32:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:08.598 11:32:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:08.598 11:32:35 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:08.598 11:32:35 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:08.598 11:32:35 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:25:08.598 11:32:35 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:25:08.598 11:32:35 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:08.857 11:32:35 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:08.857 11:32:35 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:08.857 11:32:35 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 6215b262-9879-4d5e-8bcc-c76e89653870 00:25:08.857 11:32:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6215b262-9879-4d5e-8bcc-c76e89653870 00:25:08.857 11:32:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:08.857 11:32:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:08.857 11:32:35 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:08.857 11:32:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6215b262-9879-4d5e-8bcc-c76e89653870 00:25:09.116 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:09.116 { 00:25:09.116 "name": "6215b262-9879-4d5e-8bcc-c76e89653870", 00:25:09.116 "aliases": [ 00:25:09.116 "lvs/nvme0n1p0" 00:25:09.116 ], 00:25:09.116 "product_name": "Logical Volume", 00:25:09.116 "block_size": 4096, 00:25:09.116 "num_blocks": 26476544, 00:25:09.116 "uuid": "6215b262-9879-4d5e-8bcc-c76e89653870", 00:25:09.116 "assigned_rate_limits": { 00:25:09.116 "rw_ios_per_sec": 0, 00:25:09.116 "rw_mbytes_per_sec": 0, 00:25:09.116 "r_mbytes_per_sec": 0, 00:25:09.116 "w_mbytes_per_sec": 0 00:25:09.116 }, 00:25:09.116 "claimed": false, 00:25:09.116 "zoned": false, 00:25:09.116 "supported_io_types": { 00:25:09.116 "read": true, 00:25:09.116 "write": true, 00:25:09.116 "unmap": true, 00:25:09.116 "flush": false, 00:25:09.116 "reset": true, 00:25:09.116 "nvme_admin": false, 00:25:09.116 "nvme_io": false, 00:25:09.116 "nvme_io_md": false, 00:25:09.116 "write_zeroes": true, 00:25:09.116 "zcopy": false, 00:25:09.116 "get_zone_info": false, 00:25:09.116 "zone_management": false, 00:25:09.116 "zone_append": false, 00:25:09.116 "compare": false, 00:25:09.116 "compare_and_write": false, 00:25:09.116 "abort": false, 00:25:09.116 "seek_hole": true, 00:25:09.116 "seek_data": true, 00:25:09.116 "copy": false, 00:25:09.116 "nvme_iov_md": false 00:25:09.116 }, 00:25:09.116 "driver_specific": { 00:25:09.116 "lvol": { 00:25:09.116 "lvol_store_uuid": "1cd90635-26e9-4370-9434-52315d0c6900", 00:25:09.116 "base_bdev": "nvme0n1", 00:25:09.116 "thin_provision": true, 00:25:09.116 "num_allocated_clusters": 0, 00:25:09.116 "snapshot": false, 00:25:09.116 "clone": false, 00:25:09.116 "esnap_clone": false 00:25:09.116 } 00:25:09.116 } 00:25:09.116 } 00:25:09.116 ]' 00:25:09.116 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
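The get_bdev_size calls traced through here reduce to one piece of arithmetic: size in MiB = block_size * num_blocks / 2^20. For nvme0n1 that is 4096 * 1310720 / 2^20 = 5120 MiB, and for the 26476544-block lvol it comes out to 103424 MiB, which is where base_size and the cache sizing below come from. A standalone sketch of the same lookup, using the rpc.py path and jq filters that appear in this run:

# Sketch of the size computation behind get_bdev_size: query the bdev over
# RPC, pull block_size and num_blocks with jq, convert the product to MiB.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdev_size_mb() {
  local info bs nb
  info=$("$rpc" bdev_get_bdevs -b "$1") || return 1
  bs=$(jq '.[] .block_size' <<< "$info")
  nb=$(jq '.[] .num_blocks' <<< "$info")
  echo $(( bs * nb / 1024 / 1024 ))
}
bdev_size_mb nvme0n1   # 4096 * 1310720 / 2^20 -> 5120 (MiB)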
00:25:09.116 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:09.116 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:09.116 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:09.116 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:09.116 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:09.116 11:32:36 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:25:09.116 11:32:36 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:09.375 11:32:36 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:25:09.375 11:32:36 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 6215b262-9879-4d5e-8bcc-c76e89653870 00:25:09.375 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6215b262-9879-4d5e-8bcc-c76e89653870 00:25:09.375 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:09.375 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:09.375 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:09.375 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6215b262-9879-4d5e-8bcc-c76e89653870 00:25:09.635 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:09.635 { 00:25:09.635 "name": "6215b262-9879-4d5e-8bcc-c76e89653870", 00:25:09.635 "aliases": [ 00:25:09.635 "lvs/nvme0n1p0" 00:25:09.635 ], 00:25:09.635 "product_name": "Logical Volume", 00:25:09.635 "block_size": 4096, 00:25:09.635 "num_blocks": 26476544, 00:25:09.635 "uuid": "6215b262-9879-4d5e-8bcc-c76e89653870", 00:25:09.635 "assigned_rate_limits": { 00:25:09.635 "rw_ios_per_sec": 0, 00:25:09.635 "rw_mbytes_per_sec": 0, 00:25:09.635 "r_mbytes_per_sec": 0, 00:25:09.635 "w_mbytes_per_sec": 0 00:25:09.635 }, 00:25:09.635 "claimed": false, 00:25:09.635 "zoned": false, 00:25:09.635 "supported_io_types": { 00:25:09.635 "read": true, 00:25:09.635 "write": true, 00:25:09.635 "unmap": true, 00:25:09.635 "flush": false, 00:25:09.635 "reset": true, 00:25:09.635 "nvme_admin": false, 00:25:09.635 "nvme_io": false, 00:25:09.635 "nvme_io_md": false, 00:25:09.635 "write_zeroes": true, 00:25:09.635 "zcopy": false, 00:25:09.635 "get_zone_info": false, 00:25:09.635 "zone_management": false, 00:25:09.635 "zone_append": false, 00:25:09.635 "compare": false, 00:25:09.635 "compare_and_write": false, 00:25:09.635 "abort": false, 00:25:09.635 "seek_hole": true, 00:25:09.635 "seek_data": true, 00:25:09.635 "copy": false, 00:25:09.635 "nvme_iov_md": false 00:25:09.635 }, 00:25:09.635 "driver_specific": { 00:25:09.635 "lvol": { 00:25:09.635 "lvol_store_uuid": "1cd90635-26e9-4370-9434-52315d0c6900", 00:25:09.635 "base_bdev": "nvme0n1", 00:25:09.635 "thin_provision": true, 00:25:09.635 "num_allocated_clusters": 0, 00:25:09.635 "snapshot": false, 00:25:09.635 "clone": false, 00:25:09.635 "esnap_clone": false 00:25:09.635 } 00:25:09.635 } 00:25:09.635 } 00:25:09.635 ]' 00:25:09.635 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:09.635 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:09.635 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:09.635 11:32:36 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:25:09.635 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:09.635 11:32:36 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:09.635 11:32:36 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:25:09.635 11:32:36 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6215b262-9879-4d5e-8bcc-c76e89653870 --l2p_dram_limit 10' 00:25:09.635 11:32:36 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:25:09.635 11:32:36 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:25:09.635 11:32:36 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:09.635 11:32:36 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:25:09.635 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:25:09.635 11:32:36 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6215b262-9879-4d5e-8bcc-c76e89653870 --l2p_dram_limit 10 -c nvc0n1p0 00:25:09.895 [2024-12-10 11:32:36.903109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.895 [2024-12-10 11:32:36.903161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:09.895 [2024-12-10 11:32:36.903180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:09.895 [2024-12-10 11:32:36.903191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.895 [2024-12-10 11:32:36.903254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.895 [2024-12-10 11:32:36.903265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:09.895 [2024-12-10 11:32:36.903278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:09.895 [2024-12-10 11:32:36.903288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.895 [2024-12-10 11:32:36.903318] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:09.895 [2024-12-10 11:32:36.904279] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:09.895 [2024-12-10 11:32:36.904313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.895 [2024-12-10 11:32:36.904325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:09.895 [2024-12-10 11:32:36.904338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:25:09.895 [2024-12-10 11:32:36.904347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.895 [2024-12-10 11:32:36.904427] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d3773636-b766-4115-80d1-23bd7ec89892 00:25:09.895 [2024-12-10 11:32:36.905832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.895 [2024-12-10 11:32:36.905871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:09.895 [2024-12-10 11:32:36.905883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:09.895 [2024-12-10 11:32:36.905896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.895 [2024-12-10 11:32:36.913401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.895 [2024-12-10 
11:32:36.913446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:09.895 [2024-12-10 11:32:36.913458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.463 ms 00:25:09.895 [2024-12-10 11:32:36.913470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.895 [2024-12-10 11:32:36.913565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.895 [2024-12-10 11:32:36.913581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:09.895 [2024-12-10 11:32:36.913593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:25:09.895 [2024-12-10 11:32:36.913610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.895 [2024-12-10 11:32:36.913677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.895 [2024-12-10 11:32:36.913693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:09.895 [2024-12-10 11:32:36.913706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:09.895 [2024-12-10 11:32:36.913718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.895 [2024-12-10 11:32:36.913742] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:09.895 [2024-12-10 11:32:36.918898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.895 [2024-12-10 11:32:36.918938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:09.895 [2024-12-10 11:32:36.918954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.168 ms 00:25:09.895 [2024-12-10 11:32:36.918964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.895 [2024-12-10 11:32:36.919003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.896 [2024-12-10 11:32:36.919014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:09.896 [2024-12-10 11:32:36.919027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:09.896 [2024-12-10 11:32:36.919036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.896 [2024-12-10 11:32:36.919072] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:09.896 [2024-12-10 11:32:36.919205] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:09.896 [2024-12-10 11:32:36.919225] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:09.896 [2024-12-10 11:32:36.919238] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:09.896 [2024-12-10 11:32:36.919253] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:09.896 [2024-12-10 11:32:36.919265] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:09.896 [2024-12-10 11:32:36.919279] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:09.896 [2024-12-10 11:32:36.919288] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:09.896 [2024-12-10 11:32:36.919304] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:09.896 [2024-12-10 11:32:36.919313] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:09.896 [2024-12-10 11:32:36.919325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.896 [2024-12-10 11:32:36.919345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:09.896 [2024-12-10 11:32:36.919358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:25:09.896 [2024-12-10 11:32:36.919368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.896 [2024-12-10 11:32:36.919441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.896 [2024-12-10 11:32:36.919452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:09.896 [2024-12-10 11:32:36.919464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:09.896 [2024-12-10 11:32:36.919473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.896 [2024-12-10 11:32:36.919556] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:09.896 [2024-12-10 11:32:36.919569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:09.896 [2024-12-10 11:32:36.919581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:09.896 [2024-12-10 11:32:36.919592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.896 [2024-12-10 11:32:36.919603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:09.896 [2024-12-10 11:32:36.919612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:09.896 [2024-12-10 11:32:36.919623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:09.896 [2024-12-10 11:32:36.919632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:09.896 [2024-12-10 11:32:36.919643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:09.896 [2024-12-10 11:32:36.919652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:09.896 [2024-12-10 11:32:36.919663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:09.896 [2024-12-10 11:32:36.919672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:09.896 [2024-12-10 11:32:36.919685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:09.896 [2024-12-10 11:32:36.919694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:09.896 [2024-12-10 11:32:36.919704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:09.896 [2024-12-10 11:32:36.919714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.896 [2024-12-10 11:32:36.919727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:09.896 [2024-12-10 11:32:36.919735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:09.896 [2024-12-10 11:32:36.919746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.896 [2024-12-10 11:32:36.919754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:09.896 [2024-12-10 11:32:36.919764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:09.896 [2024-12-10 11:32:36.919772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.896 [2024-12-10 11:32:36.919783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:09.896 
[2024-12-10 11:32:36.919792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:09.896 [2024-12-10 11:32:36.919803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.896 [2024-12-10 11:32:36.919811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:09.896 [2024-12-10 11:32:36.919821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:09.896 [2024-12-10 11:32:36.919829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.896 [2024-12-10 11:32:36.919841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:09.896 [2024-12-10 11:32:36.919850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:09.896 [2024-12-10 11:32:36.919861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.896 [2024-12-10 11:32:36.919869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:09.896 [2024-12-10 11:32:36.919882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:09.896 [2024-12-10 11:32:36.919890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:09.896 [2024-12-10 11:32:36.919901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:09.896 [2024-12-10 11:32:36.919909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:09.896 [2024-12-10 11:32:36.919934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:09.896 [2024-12-10 11:32:36.919942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:09.896 [2024-12-10 11:32:36.919954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:09.896 [2024-12-10 11:32:36.919963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.896 [2024-12-10 11:32:36.919974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:09.896 [2024-12-10 11:32:36.919982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:09.896 [2024-12-10 11:32:36.919993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.896 [2024-12-10 11:32:36.920001] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:09.896 [2024-12-10 11:32:36.920013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:09.896 [2024-12-10 11:32:36.920022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:09.896 [2024-12-10 11:32:36.920034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.896 [2024-12-10 11:32:36.920045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:09.896 [2024-12-10 11:32:36.920059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:09.896 [2024-12-10 11:32:36.920068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:09.896 [2024-12-10 11:32:36.920079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:09.896 [2024-12-10 11:32:36.920088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:09.896 [2024-12-10 11:32:36.920099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:09.896 [2024-12-10 11:32:36.920110] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:09.896 [2024-12-10 
11:32:36.920127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:09.896 [2024-12-10 11:32:36.920138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:09.896 [2024-12-10 11:32:36.920151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:09.896 [2024-12-10 11:32:36.920161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:09.896 [2024-12-10 11:32:36.920173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:09.896 [2024-12-10 11:32:36.920183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:09.896 [2024-12-10 11:32:36.920196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:09.896 [2024-12-10 11:32:36.920205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:09.896 [2024-12-10 11:32:36.920217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:09.896 [2024-12-10 11:32:36.920226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:09.896 [2024-12-10 11:32:36.920242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:09.896 [2024-12-10 11:32:36.920252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:09.896 [2024-12-10 11:32:36.920263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:09.896 [2024-12-10 11:32:36.920273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:09.896 [2024-12-10 11:32:36.920285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:09.896 [2024-12-10 11:32:36.920294] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:09.896 [2024-12-10 11:32:36.920307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:09.896 [2024-12-10 11:32:36.920317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:09.896 [2024-12-10 11:32:36.920329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:09.896 [2024-12-10 11:32:36.920339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:09.896 [2024-12-10 11:32:36.920351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:09.896 [2024-12-10 11:32:36.920362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.896 [2024-12-10 11:32:36.920374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:09.896 [2024-12-10 11:32:36.920384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms 00:25:09.896 [2024-12-10 11:32:36.920395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.897 [2024-12-10 11:32:36.920434] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:09.897 [2024-12-10 11:32:36.920451] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:14.149 [2024-12-10 11:32:40.675373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.149 [2024-12-10 11:32:40.675437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:14.149 [2024-12-10 11:32:40.675455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3761.033 ms 00:25:14.149 [2024-12-10 11:32:40.675467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.149 [2024-12-10 11:32:40.712754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.149 [2024-12-10 11:32:40.712808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:14.149 [2024-12-10 11:32:40.712824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.094 ms 00:25:14.149 [2024-12-10 11:32:40.712837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.149 [2024-12-10 11:32:40.712964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.149 [2024-12-10 11:32:40.712982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:14.149 [2024-12-10 11:32:40.712994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:14.149 [2024-12-10 11:32:40.713061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.149 [2024-12-10 11:32:40.758059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.149 [2024-12-10 11:32:40.758106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:14.149 [2024-12-10 11:32:40.758121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.030 ms 00:25:14.149 [2024-12-10 11:32:40.758133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.149 [2024-12-10 11:32:40.758167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.149 [2024-12-10 11:32:40.758185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:14.149 [2024-12-10 11:32:40.758195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:14.149 [2024-12-10 11:32:40.758217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.149 [2024-12-10 11:32:40.758698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.149 [2024-12-10 11:32:40.758725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:14.149 [2024-12-10 11:32:40.758736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:25:14.149 [2024-12-10 11:32:40.758749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.149 
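The superblock layout dump earlier in this startup counts every region in 4096-byte blocks (the hex blk_offs/blk_sz fields), and the MiB figures in the NV cache layout are just that unit conversion: the l2p region's blk_sz:0x5000 is 0x5000 * 4096 / 2^20 = 80.00 MiB, and each 0x800-block P2L checkpoint region is 8.00 MiB. The same cross-check in shell:

# Unit check against the layout dump above: blk_sz is in 4 KiB blocks,
# so the hex sizes convert directly to the MiB figures in the region list.
for blk_sz in 0x5000 0x800; do
  printf '%s -> %d MiB\n' "$blk_sz" $(( blk_sz * 4096 / 1024 / 1024 ))
done
# 0x5000 -> 80 MiB (l2p), 0x800 -> 8 MiB (each p2l checkpoint region)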
[2024-12-10 11:32:40.758840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.149 [2024-12-10 11:32:40.758858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:14.149 [2024-12-10 11:32:40.758872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:25:14.149 [2024-12-10 11:32:40.758886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.149 [2024-12-10 11:32:40.778037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.149 [2024-12-10 11:32:40.778080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:14.150 [2024-12-10 11:32:40.778094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.161 ms 00:25:14.150 [2024-12-10 11:32:40.778106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.150 [2024-12-10 11:32:40.817024] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:14.150 [2024-12-10 11:32:40.820989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.150 [2024-12-10 11:32:40.821032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:14.150 [2024-12-10 11:32:40.821055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.869 ms 00:25:14.150 [2024-12-10 11:32:40.821069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.150 [2024-12-10 11:32:40.920039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.150 [2024-12-10 11:32:40.920094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:14.150 [2024-12-10 11:32:40.920112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.078 ms 00:25:14.150 [2024-12-10 11:32:40.920123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.150 [2024-12-10 11:32:40.920299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.150 [2024-12-10 11:32:40.920318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:14.150 [2024-12-10 11:32:40.920335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:25:14.150 [2024-12-10 11:32:40.920344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.150 [2024-12-10 11:32:40.954161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.150 [2024-12-10 11:32:40.954200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:14.150 [2024-12-10 11:32:40.954216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.818 ms 00:25:14.150 [2024-12-10 11:32:40.954226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.150 [2024-12-10 11:32:40.987787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.150 [2024-12-10 11:32:40.987823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:14.150 [2024-12-10 11:32:40.987840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.566 ms 00:25:14.150 [2024-12-10 11:32:40.987850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.150 [2024-12-10 11:32:40.988525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.150 [2024-12-10 11:32:40.988553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:14.150 
[2024-12-10 11:32:40.988567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:25:14.150 [2024-12-10 11:32:40.988579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.150 [2024-12-10 11:32:41.085325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.150 [2024-12-10 11:32:41.085365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:14.150 [2024-12-10 11:32:41.085386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.844 ms 00:25:14.150 [2024-12-10 11:32:41.085397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.150 [2024-12-10 11:32:41.121486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.150 [2024-12-10 11:32:41.121523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:14.150 [2024-12-10 11:32:41.121539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.054 ms 00:25:14.150 [2024-12-10 11:32:41.121548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.150 [2024-12-10 11:32:41.155412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.150 [2024-12-10 11:32:41.155448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:14.150 [2024-12-10 11:32:41.155464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.873 ms 00:25:14.150 [2024-12-10 11:32:41.155473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.150 [2024-12-10 11:32:41.189435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.150 [2024-12-10 11:32:41.189472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:14.150 [2024-12-10 11:32:41.189488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.968 ms 00:25:14.150 [2024-12-10 11:32:41.189497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.150 [2024-12-10 11:32:41.189542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.150 [2024-12-10 11:32:41.189555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:14.150 [2024-12-10 11:32:41.189572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:14.150 [2024-12-10 11:32:41.189582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.150 [2024-12-10 11:32:41.189683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.150 [2024-12-10 11:32:41.189700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:14.150 [2024-12-10 11:32:41.189713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:14.150 [2024-12-10 11:32:41.189722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.150 [2024-12-10 11:32:41.190775] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4294.205 ms, result 0 00:25:14.150 { 00:25:14.150 "name": "ftl0", 00:25:14.150 "uuid": "d3773636-b766-4115-80d1-23bd7ec89892" 00:25:14.150 } 00:25:14.150 11:32:41 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:25:14.150 11:32:41 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:14.409 11:32:41 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:25:14.409 11:32:41 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:14.669 [2024-12-10 11:32:41.565666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.669 [2024-12-10 11:32:41.565722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:14.669 [2024-12-10 11:32:41.565736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:14.669 [2024-12-10 11:32:41.565750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.669 [2024-12-10 11:32:41.565773] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:14.669 [2024-12-10 11:32:41.569629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.669 [2024-12-10 11:32:41.569659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:14.669 [2024-12-10 11:32:41.569673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.841 ms 00:25:14.669 [2024-12-10 11:32:41.569684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.669 [2024-12-10 11:32:41.569938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.669 [2024-12-10 11:32:41.569957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:14.669 [2024-12-10 11:32:41.569970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:25:14.669 [2024-12-10 11:32:41.569981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.669 [2024-12-10 11:32:41.572398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.669 [2024-12-10 11:32:41.572422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:14.669 [2024-12-10 11:32:41.572434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.403 ms 00:25:14.669 [2024-12-10 11:32:41.572444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.669 [2024-12-10 11:32:41.577126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.669 [2024-12-10 11:32:41.577157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:14.669 [2024-12-10 11:32:41.577175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.669 ms 00:25:14.669 [2024-12-10 11:32:41.577185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.669 [2024-12-10 11:32:41.611289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.669 [2024-12-10 11:32:41.611328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:14.669 [2024-12-10 11:32:41.611345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.092 ms 00:25:14.669 [2024-12-10 11:32:41.611355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.669 [2024-12-10 11:32:41.632415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.669 [2024-12-10 11:32:41.632454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:14.669 [2024-12-10 11:32:41.632470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.048 ms 00:25:14.669 [2024-12-10 11:32:41.632480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.669 [2024-12-10 11:32:41.632622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.669 [2024-12-10 11:32:41.632636] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:14.669 [2024-12-10 11:32:41.632649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:25:14.669 [2024-12-10 11:32:41.632659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.669 [2024-12-10 11:32:41.666497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.669 [2024-12-10 11:32:41.666545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:14.669 [2024-12-10 11:32:41.666561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.869 ms 00:25:14.670 [2024-12-10 11:32:41.666570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.670 [2024-12-10 11:32:41.699601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.670 [2024-12-10 11:32:41.699635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:14.670 [2024-12-10 11:32:41.699651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.040 ms 00:25:14.670 [2024-12-10 11:32:41.699660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.670 [2024-12-10 11:32:41.732380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.670 [2024-12-10 11:32:41.732415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:14.670 [2024-12-10 11:32:41.732430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.726 ms 00:25:14.670 [2024-12-10 11:32:41.732439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.670 [2024-12-10 11:32:41.765241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.670 [2024-12-10 11:32:41.765276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:14.670 [2024-12-10 11:32:41.765291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.763 ms 00:25:14.670 [2024-12-10 11:32:41.765300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.670 [2024-12-10 11:32:41.765340] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:14.670 [2024-12-10 11:32:41.765355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765483] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 
[2024-12-10 11:32:41.765774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.765999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:25:14.670 [2024-12-10 11:32:41.766079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:14.670 [2024-12-10 11:32:41.766274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:14.671 [2024-12-10 11:32:41.766555] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:14.671 [2024-12-10 11:32:41.766567] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d3773636-b766-4115-80d1-23bd7ec89892 00:25:14.671 [2024-12-10 11:32:41.766577] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:14.671 [2024-12-10 11:32:41.766591] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:14.671 [2024-12-10 11:32:41.766603] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:14.671 [2024-12-10 11:32:41.766614] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:14.671 [2024-12-10 11:32:41.766623] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:14.671 [2024-12-10 11:32:41.766635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:14.671 [2024-12-10 11:32:41.766644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:14.671 [2024-12-10 11:32:41.766656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:14.671 [2024-12-10 11:32:41.766665] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:25:14.671 [2024-12-10 11:32:41.766677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.671 [2024-12-10 11:32:41.766687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:14.671 [2024-12-10 11:32:41.766699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.340 ms 00:25:14.671 [2024-12-10 11:32:41.766711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.930 [2024-12-10 11:32:41.784955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.930 [2024-12-10 11:32:41.784988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:14.930 [2024-12-10 11:32:41.785002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.221 ms 00:25:14.930 [2024-12-10 11:32:41.785012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.930 [2024-12-10 11:32:41.785527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.930 [2024-12-10 11:32:41.785552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:14.930 [2024-12-10 11:32:41.785569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:25:14.930 [2024-12-10 11:32:41.785579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.930 [2024-12-10 11:32:41.847234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.930 [2024-12-10 11:32:41.847270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:14.930 [2024-12-10 11:32:41.847285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.930 [2024-12-10 11:32:41.847295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.930 [2024-12-10 11:32:41.847351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.930 [2024-12-10 11:32:41.847361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:14.930 [2024-12-10 11:32:41.847377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.930 [2024-12-10 11:32:41.847387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.930 [2024-12-10 11:32:41.847477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.930 [2024-12-10 11:32:41.847491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:14.930 [2024-12-10 11:32:41.847507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.930 [2024-12-10 11:32:41.847516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.930 [2024-12-10 11:32:41.847540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.930 [2024-12-10 11:32:41.847551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:14.930 [2024-12-10 11:32:41.847562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.930 [2024-12-10 11:32:41.847575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.930 [2024-12-10 11:32:41.960803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.930 [2024-12-10 11:32:41.960850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:14.930 [2024-12-10 11:32:41.960866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
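(A reading of the statistics dump above, not something the log states itself: no user I/O was issued between startup and unload, so the write amplification factor degenerates to a division by zero, which ftl_debug.c prints as "inf":

    WAF = total writes / user writes = 960 / 0 -> inf

The 960 media writes are plausibly pure metadata traffic from create and unload — superblock, band info, trim and P2L persistence — which is consistent with "total valid LBAs: 0" and "user writes: 0" in the same dump.)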
00:25:14.930 [2024-12-10 11:32:41.960876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.189 [2024-12-10 11:32:42.053386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.189 [2024-12-10 11:32:42.053438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:15.189 [2024-12-10 11:32:42.053455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.189 [2024-12-10 11:32:42.053468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.189 [2024-12-10 11:32:42.053571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.189 [2024-12-10 11:32:42.053585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:15.189 [2024-12-10 11:32:42.053598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.189 [2024-12-10 11:32:42.053608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.189 [2024-12-10 11:32:42.053661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.189 [2024-12-10 11:32:42.053673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:15.189 [2024-12-10 11:32:42.053686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.189 [2024-12-10 11:32:42.053697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.189 [2024-12-10 11:32:42.053807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.189 [2024-12-10 11:32:42.053821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:15.189 [2024-12-10 11:32:42.053833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.189 [2024-12-10 11:32:42.053843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.189 [2024-12-10 11:32:42.053885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.189 [2024-12-10 11:32:42.053897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:15.189 [2024-12-10 11:32:42.053910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.189 [2024-12-10 11:32:42.053939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.189 [2024-12-10 11:32:42.053986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.189 [2024-12-10 11:32:42.053997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:15.189 [2024-12-10 11:32:42.054010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.189 [2024-12-10 11:32:42.054021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.189 [2024-12-10 11:32:42.054069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.189 [2024-12-10 11:32:42.054080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:15.189 [2024-12-10 11:32:42.054092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.189 [2024-12-10 11:32:42.054103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.189 [2024-12-10 11:32:42.054236] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 489.322 ms, result 0 00:25:15.189 true 00:25:15.189 11:32:42 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79168 
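(For readers unfamiliar with the helper being traced below: restore.sh@66 calls killprocess on the SPDK app, pid 79168, once bdev_ftl_unload has returned. The real function lives in common/autotest_common.sh; what follows is a minimal bash sketch reconstructed from the visible xtrace, so names and edge-case handling are approximations rather than the exact source:

    killprocess() {
        local pid=$1
        # nothing to do without a pid
        [[ -n "$pid" ]] || return 1
        # kill -0 only probes that the process exists and is signalable
        kill -0 "$pid" || return 1
        if [[ "$(uname)" == "Linux" ]]; then
            # resolve the command name so a sudo wrapper is never
            # signaled directly; the real helper special-cases sudo,
            # but the traced run never hits that branch, so this
            # sketch simply bails out there
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ "$process_name" != "sudo" ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        # reap the process so the test leaves no zombie behind
        wait "$pid"
    }

In the trace that follows, kill -0 succeeds, the command name resolves to reactor_0 — the SPDK event framework's reactor thread — the sudo check is a no-op, and wait 79168 is what makes the shell block until the app has fully exited.)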
00:25:15.189 11:32:42 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79168 ']' 00:25:15.189 11:32:42 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79168 00:25:15.189 11:32:42 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:25:15.189 11:32:42 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:15.189 11:32:42 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79168 00:25:15.189 11:32:42 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:15.189 11:32:42 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:15.189 killing process with pid 79168 00:25:15.189 11:32:42 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79168' 00:25:15.189 11:32:42 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79168 00:25:15.189 11:32:42 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79168 00:25:20.462 11:32:46 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:25:23.749 262144+0 records in 00:25:23.749 262144+0 records out 00:25:23.749 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.93063 s, 273 MB/s 00:25:23.749 11:32:50 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:25.124 11:32:52 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:25.384 [2024-12-10 11:32:52.317041] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:25:25.384 [2024-12-10 11:32:52.317194] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79410 ] 00:25:25.642 [2024-12-10 11:32:52.500227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.642 [2024-12-10 11:32:52.608390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.901 [2024-12-10 11:32:52.971161] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:25.901 [2024-12-10 11:32:52.971233] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:26.162 [2024-12-10 11:32:53.133324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.162 [2024-12-10 11:32:53.133375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:26.162 [2024-12-10 11:32:53.133389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:26.162 [2024-12-10 11:32:53.133399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.162 [2024-12-10 11:32:53.133453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.162 [2024-12-10 11:32:53.133467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:26.162 [2024-12-10 11:32:53.133478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:26.162 [2024-12-10 11:32:53.133487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.162 [2024-12-10 11:32:53.133507] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:25:26.162 [2024-12-10 11:32:53.134414] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:26.162 [2024-12-10 11:32:53.134442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.162 [2024-12-10 11:32:53.134453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:26.162 [2024-12-10 11:32:53.134464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:25:26.162 [2024-12-10 11:32:53.134474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.162 [2024-12-10 11:32:53.135943] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:26.162 [2024-12-10 11:32:53.154061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.162 [2024-12-10 11:32:53.154104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:26.162 [2024-12-10 11:32:53.154118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.148 ms 00:25:26.162 [2024-12-10 11:32:53.154129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.162 [2024-12-10 11:32:53.154199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.162 [2024-12-10 11:32:53.154213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:26.162 [2024-12-10 11:32:53.154223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:26.162 [2024-12-10 11:32:53.154233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.162 [2024-12-10 11:32:53.161036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.162 [2024-12-10 11:32:53.161065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:26.162 [2024-12-10 11:32:53.161076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.747 ms 00:25:26.162 [2024-12-10 11:32:53.161089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.162 [2024-12-10 11:32:53.161160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.162 [2024-12-10 11:32:53.161172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:26.162 [2024-12-10 11:32:53.161183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:26.162 [2024-12-10 11:32:53.161193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.162 [2024-12-10 11:32:53.161248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.162 [2024-12-10 11:32:53.161262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:26.162 [2024-12-10 11:32:53.161273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:26.162 [2024-12-10 11:32:53.161283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.162 [2024-12-10 11:32:53.161312] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:26.162 [2024-12-10 11:32:53.165967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.162 [2024-12-10 11:32:53.166003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:26.162 [2024-12-10 11:32:53.166018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.667 ms 00:25:26.162 [2024-12-10 11:32:53.166028] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.162 [2024-12-10 11:32:53.166059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.162 [2024-12-10 11:32:53.166070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:26.162 [2024-12-10 11:32:53.166079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:26.162 [2024-12-10 11:32:53.166088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.162 [2024-12-10 11:32:53.166135] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:26.162 [2024-12-10 11:32:53.166228] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:26.162 [2024-12-10 11:32:53.166265] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:26.162 [2024-12-10 11:32:53.166301] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:26.162 [2024-12-10 11:32:53.166389] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:26.162 [2024-12-10 11:32:53.166402] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:26.162 [2024-12-10 11:32:53.166415] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:26.162 [2024-12-10 11:32:53.166428] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:26.162 [2024-12-10 11:32:53.166440] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:26.162 [2024-12-10 11:32:53.166450] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:26.162 [2024-12-10 11:32:53.166460] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:26.162 [2024-12-10 11:32:53.166473] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:26.162 [2024-12-10 11:32:53.166483] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:26.162 [2024-12-10 11:32:53.166495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.162 [2024-12-10 11:32:53.166505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:26.162 [2024-12-10 11:32:53.166515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:25:26.162 [2024-12-10 11:32:53.166525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.162 [2024-12-10 11:32:53.166595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.162 [2024-12-10 11:32:53.166606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:26.162 [2024-12-10 11:32:53.166616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:26.162 [2024-12-10 11:32:53.166625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.162 [2024-12-10 11:32:53.166711] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:26.162 [2024-12-10 11:32:53.166724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:26.162 [2024-12-10 11:32:53.166735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:25:26.162 [2024-12-10 11:32:53.166744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.162 [2024-12-10 11:32:53.166755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:26.162 [2024-12-10 11:32:53.166764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:26.162 [2024-12-10 11:32:53.166773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:26.162 [2024-12-10 11:32:53.166782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:26.162 [2024-12-10 11:32:53.166791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:26.162 [2024-12-10 11:32:53.166800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:26.162 [2024-12-10 11:32:53.166811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:26.162 [2024-12-10 11:32:53.166821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:26.162 [2024-12-10 11:32:53.166831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:26.162 [2024-12-10 11:32:53.166850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:26.163 [2024-12-10 11:32:53.166860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:26.163 [2024-12-10 11:32:53.166870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.163 [2024-12-10 11:32:53.166879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:26.163 [2024-12-10 11:32:53.166888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:26.163 [2024-12-10 11:32:53.166898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.163 [2024-12-10 11:32:53.166908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:26.163 [2024-12-10 11:32:53.166917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:26.163 [2024-12-10 11:32:53.166926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:26.163 [2024-12-10 11:32:53.166951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:26.163 [2024-12-10 11:32:53.166961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:26.163 [2024-12-10 11:32:53.166970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:26.163 [2024-12-10 11:32:53.166984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:26.163 [2024-12-10 11:32:53.166994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:26.163 [2024-12-10 11:32:53.167003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:26.163 [2024-12-10 11:32:53.167013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:26.163 [2024-12-10 11:32:53.167022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:26.163 [2024-12-10 11:32:53.167032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:26.163 [2024-12-10 11:32:53.167041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:26.163 [2024-12-10 11:32:53.167051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:26.163 [2024-12-10 11:32:53.167060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:26.163 [2024-12-10 11:32:53.167069] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:25:26.163 [2024-12-10 11:32:53.167078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:26.163 [2024-12-10 11:32:53.167087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:26.163 [2024-12-10 11:32:53.167096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:26.163 [2024-12-10 11:32:53.167105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:26.163 [2024-12-10 11:32:53.167113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.163 [2024-12-10 11:32:53.167122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:26.163 [2024-12-10 11:32:53.167131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:26.163 [2024-12-10 11:32:53.167141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.163 [2024-12-10 11:32:53.167150] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:26.163 [2024-12-10 11:32:53.167160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:26.163 [2024-12-10 11:32:53.167169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:26.163 [2024-12-10 11:32:53.167179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.163 [2024-12-10 11:32:53.167188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:26.163 [2024-12-10 11:32:53.167198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:26.163 [2024-12-10 11:32:53.167208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:26.163 [2024-12-10 11:32:53.167218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:26.163 [2024-12-10 11:32:53.167227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:26.163 [2024-12-10 11:32:53.167238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:26.163 [2024-12-10 11:32:53.167248] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:26.163 [2024-12-10 11:32:53.167260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:26.163 [2024-12-10 11:32:53.167275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:26.163 [2024-12-10 11:32:53.167286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:26.163 [2024-12-10 11:32:53.167296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:26.163 [2024-12-10 11:32:53.167306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:26.163 [2024-12-10 11:32:53.167316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:26.163 [2024-12-10 11:32:53.167326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:26.163 [2024-12-10 11:32:53.167336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:26.163 [2024-12-10 11:32:53.167346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:26.163 [2024-12-10 11:32:53.167355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:26.163 [2024-12-10 11:32:53.167365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:26.163 [2024-12-10 11:32:53.167375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:26.163 [2024-12-10 11:32:53.167385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:26.163 [2024-12-10 11:32:53.167395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:26.163 [2024-12-10 11:32:53.167404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:26.163 [2024-12-10 11:32:53.167414] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:26.163 [2024-12-10 11:32:53.167424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:26.163 [2024-12-10 11:32:53.167435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:26.163 [2024-12-10 11:32:53.167444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:26.163 [2024-12-10 11:32:53.167454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:26.163 [2024-12-10 11:32:53.167464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:26.163 [2024-12-10 11:32:53.167474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.163 [2024-12-10 11:32:53.167484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:26.163 [2024-12-10 11:32:53.167494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:25:26.163 [2024-12-10 11:32:53.167503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.163 [2024-12-10 11:32:53.206007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.163 [2024-12-10 11:32:53.206043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:26.163 [2024-12-10 11:32:53.206056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.523 ms 00:25:26.163 [2024-12-10 11:32:53.206086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.163 [2024-12-10 11:32:53.206159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.163 [2024-12-10 11:32:53.206170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:26.163 [2024-12-10 11:32:53.206181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.049 ms 00:25:26.163 [2024-12-10 11:32:53.206190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.423 [2024-12-10 11:32:53.277256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.423 [2024-12-10 11:32:53.277291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:26.423 [2024-12-10 11:32:53.277305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.126 ms 00:25:26.423 [2024-12-10 11:32:53.277315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.423 [2024-12-10 11:32:53.277353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.423 [2024-12-10 11:32:53.277364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:26.423 [2024-12-10 11:32:53.277378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:26.423 [2024-12-10 11:32:53.277388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.423 [2024-12-10 11:32:53.277906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.423 [2024-12-10 11:32:53.277941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:26.423 [2024-12-10 11:32:53.277953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:25:26.423 [2024-12-10 11:32:53.277962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.423 [2024-12-10 11:32:53.278076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.423 [2024-12-10 11:32:53.278090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:26.423 [2024-12-10 11:32:53.278104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:25:26.423 [2024-12-10 11:32:53.278115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.423 [2024-12-10 11:32:53.296075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.423 [2024-12-10 11:32:53.296112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:26.423 [2024-12-10 11:32:53.296125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.970 ms 00:25:26.424 [2024-12-10 11:32:53.296135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.424 [2024-12-10 11:32:53.314547] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:26.424 [2024-12-10 11:32:53.314591] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:26.424 [2024-12-10 11:32:53.314605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.424 [2024-12-10 11:32:53.314615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:26.424 [2024-12-10 11:32:53.314626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.402 ms 00:25:26.424 [2024-12-10 11:32:53.314635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.424 [2024-12-10 11:32:53.342387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.424 [2024-12-10 11:32:53.342431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:26.424 [2024-12-10 11:32:53.342445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.753 ms 00:25:26.424 [2024-12-10 11:32:53.342455] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.424 [2024-12-10 11:32:53.359681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.424 [2024-12-10 11:32:53.359718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:26.424 [2024-12-10 11:32:53.359731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.227 ms 00:25:26.424 [2024-12-10 11:32:53.359756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.424 [2024-12-10 11:32:53.376438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.424 [2024-12-10 11:32:53.376476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:26.424 [2024-12-10 11:32:53.376489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.668 ms 00:25:26.424 [2024-12-10 11:32:53.376498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.424 [2024-12-10 11:32:53.377192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.424 [2024-12-10 11:32:53.377223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:26.424 [2024-12-10 11:32:53.377235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:25:26.424 [2024-12-10 11:32:53.377249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.424 [2024-12-10 11:32:53.457416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.424 [2024-12-10 11:32:53.457481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:26.424 [2024-12-10 11:32:53.457498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.272 ms 00:25:26.424 [2024-12-10 11:32:53.457513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.424 [2024-12-10 11:32:53.467486] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:26.424 [2024-12-10 11:32:53.469697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.424 [2024-12-10 11:32:53.469727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:26.424 [2024-12-10 11:32:53.469740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.158 ms 00:25:26.424 [2024-12-10 11:32:53.469749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.424 [2024-12-10 11:32:53.469842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.424 [2024-12-10 11:32:53.469859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:26.424 [2024-12-10 11:32:53.469869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:25:26.424 [2024-12-10 11:32:53.469879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.424 [2024-12-10 11:32:53.469986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.424 [2024-12-10 11:32:53.470001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:26.424 [2024-12-10 11:32:53.470012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:26.424 [2024-12-10 11:32:53.470037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.424 [2024-12-10 11:32:53.470063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.424 [2024-12-10 11:32:53.470074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:25:26.424 [2024-12-10 11:32:53.470084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:26.424 [2024-12-10 11:32:53.470093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.424 [2024-12-10 11:32:53.470134] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:26.424 [2024-12-10 11:32:53.470150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.424 [2024-12-10 11:32:53.470161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:26.424 [2024-12-10 11:32:53.470171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:26.424 [2024-12-10 11:32:53.470181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.424 [2024-12-10 11:32:53.504196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.424 [2024-12-10 11:32:53.504233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:26.424 [2024-12-10 11:32:53.504246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.051 ms 00:25:26.424 [2024-12-10 11:32:53.504262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.424 [2024-12-10 11:32:53.504341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.424 [2024-12-10 11:32:53.504357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:26.424 [2024-12-10 11:32:53.504368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:26.424 [2024-12-10 11:32:53.504378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.424 [2024-12-10 11:32:53.505490] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 372.331 ms, result 0 00:25:27.803  [2024-12-10T11:32:55.855Z] Copying: 22/1024 [MB] (22 MBps) … [2024-12-10T11:33:37.687Z] Copying: 1022/1024 [MB] (23 MBps) [2024-12-10T11:33:37.687Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-10 11:33:37.519234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.573 [2024-12-10 11:33:37.519280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:10.573 [2024-12-10 11:33:37.519296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:10.573 [2024-12-10 11:33:37.519307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.573 [2024-12-10 11:33:37.519329] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:10.573 [2024-12-10 11:33:37.523427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.573 [2024-12-10 11:33:37.523469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:10.573 [2024-12-10 11:33:37.523504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.088 ms 00:26:10.573 [2024-12-10 11:33:37.523514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.573 [2024-12-10 11:33:37.525485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.573 [2024-12-10 11:33:37.525524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:10.573 [2024-12-10 11:33:37.525537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.949 ms 00:26:10.573 [2024-12-10 11:33:37.525547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.573 [2024-12-10 11:33:37.542891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.573 [2024-12-10 11:33:37.542934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:10.573 [2024-12-10 11:33:37.542947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.355 ms 00:26:10.573 [2024-12-10 11:33:37.542956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.573 [2024-12-10 11:33:37.547736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.573 [2024-12-10 11:33:37.547766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:10.573 [2024-12-10 11:33:37.547778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.733 ms 00:26:10.573 [2024-12-10 11:33:37.547787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.573 [2024-12-10 11:33:37.582585] mngt/ftl_mngt.c: 427:trace_step:
*NOTICE*: [FTL][ftl0] Action 00:26:10.573 [2024-12-10 11:33:37.582622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:10.573 [2024-12-10 11:33:37.582634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.795 ms 00:26:10.573 [2024-12-10 11:33:37.582643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.573 [2024-12-10 11:33:37.602525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.573 [2024-12-10 11:33:37.602562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:10.573 [2024-12-10 11:33:37.602576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.863 ms 00:26:10.573 [2024-12-10 11:33:37.602585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.573 [2024-12-10 11:33:37.602736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.573 [2024-12-10 11:33:37.602755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:10.573 [2024-12-10 11:33:37.602765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:26:10.573 [2024-12-10 11:33:37.602774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.573 [2024-12-10 11:33:37.637515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.573 [2024-12-10 11:33:37.637548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:10.573 [2024-12-10 11:33:37.637559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.782 ms 00:26:10.573 [2024-12-10 11:33:37.637568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.573 [2024-12-10 11:33:37.671040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.573 [2024-12-10 11:33:37.671075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:10.573 [2024-12-10 11:33:37.671086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.474 ms 00:26:10.573 [2024-12-10 11:33:37.671095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.834 [2024-12-10 11:33:37.704360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.834 [2024-12-10 11:33:37.704395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:10.834 [2024-12-10 11:33:37.704407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.268 ms 00:26:10.834 [2024-12-10 11:33:37.704416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.834 [2024-12-10 11:33:37.737692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.834 [2024-12-10 11:33:37.737730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:10.834 [2024-12-10 11:33:37.737742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.244 ms 00:26:10.834 [2024-12-10 11:33:37.737751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.834 [2024-12-10 11:33:37.737802] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:10.834 [2024-12-10 11:33:37.737817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.737835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.737846] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.737857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.737868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.737878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.737888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.737897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.737908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.737927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.737938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.737948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.737958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.737968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.737978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.737988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.737997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.738007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.738017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.738027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.738037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.738046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.738056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.738065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:10.834 [2024-12-10 11:33:37.738075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 
11:33:37.738104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 
00:26:10.835 [2024-12-10 11:33:37.738371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 
wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:10.835 [2024-12-10 11:33:37.738862] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:10.835 [2024-12-10 11:33:37.738876] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
d3773636-b766-4115-80d1-23bd7ec89892 00:26:10.835 [2024-12-10 11:33:37.738886] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:10.835 [2024-12-10 11:33:37.738895] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:10.835 [2024-12-10 11:33:37.738905] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:10.835 [2024-12-10 11:33:37.738915] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:10.835 [2024-12-10 11:33:37.738924] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:10.835 [2024-12-10 11:33:37.738950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:10.835 [2024-12-10 11:33:37.738960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:10.835 [2024-12-10 11:33:37.738969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:10.835 [2024-12-10 11:33:37.738977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:10.835 [2024-12-10 11:33:37.738986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.835 [2024-12-10 11:33:37.738996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:10.835 [2024-12-10 11:33:37.739006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.186 ms 00:26:10.835 [2024-12-10 11:33:37.739016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.835 [2024-12-10 11:33:37.757933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.835 [2024-12-10 11:33:37.757961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:10.835 [2024-12-10 11:33:37.757972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.911 ms 00:26:10.835 [2024-12-10 11:33:37.757982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.835 [2024-12-10 11:33:37.758602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.836 [2024-12-10 11:33:37.758623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:10.836 [2024-12-10 11:33:37.758633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:26:10.836 [2024-12-10 11:33:37.758649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.836 [2024-12-10 11:33:37.807437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.836 [2024-12-10 11:33:37.807470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:10.836 [2024-12-10 11:33:37.807497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.836 [2024-12-10 11:33:37.807508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.836 [2024-12-10 11:33:37.807557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.836 [2024-12-10 11:33:37.807567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:10.836 [2024-12-10 11:33:37.807577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.836 [2024-12-10 11:33:37.807591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.836 [2024-12-10 11:33:37.807647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.836 [2024-12-10 11:33:37.807660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:10.836 
[2024-12-10 11:33:37.807670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.836 [2024-12-10 11:33:37.807679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.836 [2024-12-10 11:33:37.807695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.836 [2024-12-10 11:33:37.807705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:10.836 [2024-12-10 11:33:37.807715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.836 [2024-12-10 11:33:37.807724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.836 [2024-12-10 11:33:37.923369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.836 [2024-12-10 11:33:37.923420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:10.836 [2024-12-10 11:33:37.923432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.836 [2024-12-10 11:33:37.923443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-12-10 11:33:38.018007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-12-10 11:33:38.018055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:11.096 [2024-12-10 11:33:38.018068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-12-10 11:33:38.018084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-12-10 11:33:38.018182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-12-10 11:33:38.018194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:11.096 [2024-12-10 11:33:38.018204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-12-10 11:33:38.018214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-12-10 11:33:38.018250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-12-10 11:33:38.018262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:11.096 [2024-12-10 11:33:38.018272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-12-10 11:33:38.018281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-12-10 11:33:38.018395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-12-10 11:33:38.018408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:11.096 [2024-12-10 11:33:38.018418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-12-10 11:33:38.018428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-12-10 11:33:38.018478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-12-10 11:33:38.018490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:11.096 [2024-12-10 11:33:38.018500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-12-10 11:33:38.018510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-12-10 11:33:38.018547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-12-10 11:33:38.018561] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:11.096 [2024-12-10 11:33:38.018571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-12-10 11:33:38.018581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-12-10 11:33:38.018621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-12-10 11:33:38.018632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:11.096 [2024-12-10 11:33:38.018642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-12-10 11:33:38.018651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-12-10 11:33:38.018769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 500.317 ms, result 0 00:26:12.479 00:26:12.479 00:26:12.479 11:33:39 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:26:12.479 [2024-12-10 11:33:39.291265] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:26:12.479 [2024-12-10 11:33:39.291392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79887 ] 00:26:12.479 [2024-12-10 11:33:39.470717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.479 [2024-12-10 11:33:39.580308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.049 [2024-12-10 11:33:39.922881] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:13.049 [2024-12-10 11:33:39.922975] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:13.049 [2024-12-10 11:33:40.082588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.049 [2024-12-10 11:33:40.082642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:13.049 [2024-12-10 11:33:40.082656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:13.049 [2024-12-10 11:33:40.082666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.049 [2024-12-10 11:33:40.082728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.049 [2024-12-10 11:33:40.082743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:13.049 [2024-12-10 11:33:40.082754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:13.049 [2024-12-10 11:33:40.082764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.049 [2024-12-10 11:33:40.082785] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:13.049 [2024-12-10 11:33:40.083709] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:13.049 [2024-12-10 11:33:40.083739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.049 [2024-12-10 11:33:40.083751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:13.049 [2024-12-10 11:33:40.083762] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:26:13.049 [2024-12-10 11:33:40.083772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.049 [2024-12-10 11:33:40.085329] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:13.049 [2024-12-10 11:33:40.103635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.049 [2024-12-10 11:33:40.103676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:13.049 [2024-12-10 11:33:40.103690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.337 ms 00:26:13.049 [2024-12-10 11:33:40.103700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.049 [2024-12-10 11:33:40.103783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.049 [2024-12-10 11:33:40.103795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:13.049 [2024-12-10 11:33:40.103806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:13.049 [2024-12-10 11:33:40.103816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.049 [2024-12-10 11:33:40.110642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.049 [2024-12-10 11:33:40.110672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:13.049 [2024-12-10 11:33:40.110684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.767 ms 00:26:13.049 [2024-12-10 11:33:40.110696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.049 [2024-12-10 11:33:40.110786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.049 [2024-12-10 11:33:40.110799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:13.049 [2024-12-10 11:33:40.110810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:13.049 [2024-12-10 11:33:40.110820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.049 [2024-12-10 11:33:40.110859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.049 [2024-12-10 11:33:40.110870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:13.049 [2024-12-10 11:33:40.110880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:13.049 [2024-12-10 11:33:40.110890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.049 [2024-12-10 11:33:40.110917] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:13.049 [2024-12-10 11:33:40.115553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.049 [2024-12-10 11:33:40.115585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:13.049 [2024-12-10 11:33:40.115600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.649 ms 00:26:13.049 [2024-12-10 11:33:40.115625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.049 [2024-12-10 11:33:40.115658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.049 [2024-12-10 11:33:40.115669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:13.049 [2024-12-10 11:33:40.115679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:13.049 [2024-12-10 11:33:40.115689] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.049 [2024-12-10 11:33:40.115741] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:13.049 [2024-12-10 11:33:40.115766] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:13.049 [2024-12-10 11:33:40.115799] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:13.049 [2024-12-10 11:33:40.115819] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:13.049 [2024-12-10 11:33:40.115921] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:13.049 [2024-12-10 11:33:40.115934] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:13.049 [2024-12-10 11:33:40.115957] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:13.049 [2024-12-10 11:33:40.115970] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:13.049 [2024-12-10 11:33:40.115982] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:13.050 [2024-12-10 11:33:40.115993] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:13.050 [2024-12-10 11:33:40.116004] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:13.050 [2024-12-10 11:33:40.116018] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:13.050 [2024-12-10 11:33:40.116028] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:13.050 [2024-12-10 11:33:40.116038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.050 [2024-12-10 11:33:40.116048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:13.050 [2024-12-10 11:33:40.116058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:26:13.050 [2024-12-10 11:33:40.116069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.050 [2024-12-10 11:33:40.116139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.050 [2024-12-10 11:33:40.116150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:13.050 [2024-12-10 11:33:40.116160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:13.050 [2024-12-10 11:33:40.116171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.050 [2024-12-10 11:33:40.116260] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:13.050 [2024-12-10 11:33:40.116282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:13.050 [2024-12-10 11:33:40.116293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:13.050 [2024-12-10 11:33:40.116304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.050 [2024-12-10 11:33:40.116314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:13.050 [2024-12-10 11:33:40.116324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:13.050 [2024-12-10 11:33:40.116333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:13.050 
[2024-12-10 11:33:40.116342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:13.050 [2024-12-10 11:33:40.116352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:13.050 [2024-12-10 11:33:40.116361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:13.050 [2024-12-10 11:33:40.116371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:13.050 [2024-12-10 11:33:40.116380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:13.050 [2024-12-10 11:33:40.116389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:13.050 [2024-12-10 11:33:40.116410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:13.050 [2024-12-10 11:33:40.116419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:13.050 [2024-12-10 11:33:40.116428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.050 [2024-12-10 11:33:40.116437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:13.050 [2024-12-10 11:33:40.116446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:13.050 [2024-12-10 11:33:40.116455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.050 [2024-12-10 11:33:40.116464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:13.050 [2024-12-10 11:33:40.116473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:13.050 [2024-12-10 11:33:40.116482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:13.050 [2024-12-10 11:33:40.116491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:13.050 [2024-12-10 11:33:40.116501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:13.050 [2024-12-10 11:33:40.116510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:13.050 [2024-12-10 11:33:40.116519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:13.050 [2024-12-10 11:33:40.116528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:13.050 [2024-12-10 11:33:40.116537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:13.050 [2024-12-10 11:33:40.116546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:13.050 [2024-12-10 11:33:40.116554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:13.050 [2024-12-10 11:33:40.116563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:13.050 [2024-12-10 11:33:40.116572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:13.050 [2024-12-10 11:33:40.116581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:13.050 [2024-12-10 11:33:40.116590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:13.050 [2024-12-10 11:33:40.116599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:13.050 [2024-12-10 11:33:40.116608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:13.050 [2024-12-10 11:33:40.116616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:13.050 [2024-12-10 11:33:40.116625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:13.050 [2024-12-10 11:33:40.116634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:26:13.050 [2024-12-10 11:33:40.116643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.050 [2024-12-10 11:33:40.116652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:13.050 [2024-12-10 11:33:40.116662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:13.050 [2024-12-10 11:33:40.116671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.050 [2024-12-10 11:33:40.116680] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:13.050 [2024-12-10 11:33:40.116690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:13.050 [2024-12-10 11:33:40.116700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:13.050 [2024-12-10 11:33:40.116709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.050 [2024-12-10 11:33:40.116719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:13.050 [2024-12-10 11:33:40.116728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:13.050 [2024-12-10 11:33:40.116737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:13.050 [2024-12-10 11:33:40.116746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:13.050 [2024-12-10 11:33:40.116755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:13.050 [2024-12-10 11:33:40.116764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:13.050 [2024-12-10 11:33:40.116774] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:13.050 [2024-12-10 11:33:40.116786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:13.050 [2024-12-10 11:33:40.116801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:13.050 [2024-12-10 11:33:40.116812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:13.050 [2024-12-10 11:33:40.116822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:13.050 [2024-12-10 11:33:40.116832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:13.050 [2024-12-10 11:33:40.116843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:13.050 [2024-12-10 11:33:40.116853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:13.050 [2024-12-10 11:33:40.116864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:13.050 [2024-12-10 11:33:40.116874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:13.050 [2024-12-10 11:33:40.116884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:13.050 [2024-12-10 11:33:40.116894] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:13.050 [2024-12-10 11:33:40.116904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:13.050 [2024-12-10 11:33:40.116928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:13.050 [2024-12-10 11:33:40.116939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:13.050 [2024-12-10 11:33:40.116950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:13.050 [2024-12-10 11:33:40.116960] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:13.050 [2024-12-10 11:33:40.116972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:13.050 [2024-12-10 11:33:40.116983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:13.050 [2024-12-10 11:33:40.116994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:13.050 [2024-12-10 11:33:40.117007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:13.050 [2024-12-10 11:33:40.117018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:13.050 [2024-12-10 11:33:40.117029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.050 [2024-12-10 11:33:40.117039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:13.050 [2024-12-10 11:33:40.117049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:26:13.050 [2024-12-10 11:33:40.117059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.050 [2024-12-10 11:33:40.156189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.050 [2024-12-10 11:33:40.156230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:13.050 [2024-12-10 11:33:40.156243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.148 ms 00:26:13.050 [2024-12-10 11:33:40.156257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.050 [2024-12-10 11:33:40.156348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.050 [2024-12-10 11:33:40.156359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:13.050 [2024-12-10 11:33:40.156369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:13.050 [2024-12-10 11:33:40.156379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.310 [2024-12-10 11:33:40.224133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.310 [2024-12-10 11:33:40.224174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:13.310 [2024-12-10 11:33:40.224187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.806 ms 
00:26:13.310 [2024-12-10 11:33:40.224198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.310 [2024-12-10 11:33:40.224255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.310 [2024-12-10 11:33:40.224267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:13.310 [2024-12-10 11:33:40.224281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:13.310 [2024-12-10 11:33:40.224291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.310 [2024-12-10 11:33:40.224799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.310 [2024-12-10 11:33:40.224822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:13.310 [2024-12-10 11:33:40.224834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:26:13.310 [2024-12-10 11:33:40.224844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.310 [2024-12-10 11:33:40.224971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.310 [2024-12-10 11:33:40.224987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:13.310 [2024-12-10 11:33:40.225001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:26:13.310 [2024-12-10 11:33:40.225011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.310 [2024-12-10 11:33:40.242613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.310 [2024-12-10 11:33:40.242652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:13.310 [2024-12-10 11:33:40.242681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.608 ms 00:26:13.310 [2024-12-10 11:33:40.242692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.310 [2024-12-10 11:33:40.262030] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:13.310 [2024-12-10 11:33:40.262071] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:13.310 [2024-12-10 11:33:40.262086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.310 [2024-12-10 11:33:40.262097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:13.310 [2024-12-10 11:33:40.262110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.321 ms 00:26:13.310 [2024-12-10 11:33:40.262120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.310 [2024-12-10 11:33:40.291069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.310 [2024-12-10 11:33:40.291106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:13.310 [2024-12-10 11:33:40.291121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.952 ms 00:26:13.310 [2024-12-10 11:33:40.291132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.310 [2024-12-10 11:33:40.308495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.310 [2024-12-10 11:33:40.308531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:13.310 [2024-12-10 11:33:40.308559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.320 ms 00:26:13.310 [2024-12-10 11:33:40.308569] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.310 [2024-12-10 11:33:40.325529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.310 [2024-12-10 11:33:40.325566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:13.311 [2024-12-10 11:33:40.325594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.951 ms 00:26:13.311 [2024-12-10 11:33:40.325604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.311 [2024-12-10 11:33:40.326366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.311 [2024-12-10 11:33:40.326398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:13.311 [2024-12-10 11:33:40.326414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:26:13.311 [2024-12-10 11:33:40.326424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.311 [2024-12-10 11:33:40.406884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.311 [2024-12-10 11:33:40.406951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:13.311 [2024-12-10 11:33:40.406972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.566 ms 00:26:13.311 [2024-12-10 11:33:40.406982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.311 [2024-12-10 11:33:40.416996] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:13.311 [2024-12-10 11:33:40.419389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.311 [2024-12-10 11:33:40.419420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:13.311 [2024-12-10 11:33:40.419433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.366 ms 00:26:13.311 [2024-12-10 11:33:40.419442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.311 [2024-12-10 11:33:40.419532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.311 [2024-12-10 11:33:40.419545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:13.311 [2024-12-10 11:33:40.419560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:13.311 [2024-12-10 11:33:40.419570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.311 [2024-12-10 11:33:40.419642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.311 [2024-12-10 11:33:40.419655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:13.311 [2024-12-10 11:33:40.419665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:13.311 [2024-12-10 11:33:40.419675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.311 [2024-12-10 11:33:40.419697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.311 [2024-12-10 11:33:40.419708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:13.311 [2024-12-10 11:33:40.419718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:13.311 [2024-12-10 11:33:40.419728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.311 [2024-12-10 11:33:40.419766] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:13.311 [2024-12-10 11:33:40.419794] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action
00:26:13.311 [2024-12-10 11:33:40.419804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:26:13.311 [2024-12-10 11:33:40.419814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:26:13.311 [2024-12-10 11:33:40.419824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:13.570 [2024-12-10 11:33:40.454419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:13.570 [2024-12-10 11:33:40.454460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:26:13.570 [2024-12-10 11:33:40.454495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.631 ms
00:26:13.570 [2024-12-10 11:33:40.454506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:13.570 [2024-12-10 11:33:40.454576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:13.570 [2024-12-10 11:33:40.454589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:26:13.570 [2024-12-10 11:33:40.454600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms
00:26:13.570 [2024-12-10 11:33:40.454609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:13.570 [2024-12-10 11:33:40.455782] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 373.376 ms, result 0
00:26:14.949  [2024-12-10T11:34:23.874Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-10 11:34:23.739813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:56.760 [2024-12-10 11:34:23.739874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:26:56.760 [2024-12-10 11:34:23.739891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:26:56.760 [2024-12-10 11:34:23.739902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:56.760 [2024-12-10 11:34:23.739940] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:56.760 [2024-12-10 11:34:23.744288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:56.760 [2024-12-10 11:34:23.744332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:26:56.760 [2024-12-10 11:34:23.744360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.334 ms
00:26:56.760 [2024-12-10 11:34:23.744371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:56.760 [2024-12-10 11:34:23.744576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:56.760 [2024-12-10 11:34:23.744589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:26:56.760 [2024-12-10 11:34:23.744600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms
00:26:56.760 [2024-12-10 11:34:23.744610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:56.760 [2024-12-10 11:34:23.747628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:56.760 [2024-12-10 11:34:23.747663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:26:56.760 [2024-12-10 11:34:23.747677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.006 ms
00:26:56.760 [2024-12-10 11:34:23.747694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:56.760 [2024-12-10 11:34:23.752850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:56.760 [2024-12-10 11:34:23.752883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:26:56.760 [2024-12-10 11:34:23.752896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.137 ms
00:26:56.760 [2024-12-10 11:34:23.752907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:56.760 [2024-12-10 11:34:23.790472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:56.760 [2024-12-10 11:34:23.790513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:26:56.760 [2024-12-10 11:34:23.790527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.558 ms
00:26:56.760 [2024-12-10 11:34:23.790538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
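(Editor's note: the repeated Action / name / duration / status quadruplets above come from the FTL management pipeline's step tracer in mngt/ftl_mngt.c, which stamps each startup or shutdown step as it runs. Below is a minimal sketch of how such per-step timing can be produced; it is an illustration only, not SPDK's actual implementation, and every identifier in it — run_step, elapsed_ms, persist_l2p — is invented for the example.)

#include <stdio.h>
#include <time.h>

/* Hypothetical sketch of a step tracer: time one management step and
 * emit the same Action/name/duration/status pattern seen in the log.
 * Not SPDK's code; all names here are invented for illustration. */
static double elapsed_ms(const struct timespec *a, const struct timespec *b)
{
	return (b->tv_sec - a->tv_sec) * 1e3 + (b->tv_nsec - a->tv_nsec) / 1e6;
}

static int run_step(const char *dev, const char *name, int (*fn)(void))
{
	struct timespec t0, t1;
	int status;

	printf("[FTL][%s] Action\n", dev);
	printf("[FTL][%s]  name:     %s\n", dev, name);
	clock_gettime(CLOCK_MONOTONIC, &t0);
	status = fn();          /* the step body itself */
	clock_gettime(CLOCK_MONOTONIC, &t1);
	printf("[FTL][%s]  duration: %.3f ms\n", dev, elapsed_ms(&t0, &t1));
	printf("[FTL][%s]  status:   %d\n", dev, status);
	return status;
}

static int persist_l2p(void) { return 0; }   /* stand-in step body */

int main(void)
{
	return run_step("ftl0", "Persist L2P", persist_l2p);
}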
00:26:56.760 [2024-12-10 11:34:23.813252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:56.760 [2024-12-10 11:34:23.813294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
[2024-12-10 11:34:23.813309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.707 ms
00:26:56.760 [2024-12-10 11:34:23.813320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:56.760 [2024-12-10 11:34:23.813468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:56.760 [2024-12-10 11:34:23.813483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:26:56.760 [2024-12-10 11:34:23.813495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms
00:26:56.760 [2024-12-10 11:34:23.813505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:56.760 [2024-12-10 11:34:23.849245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:56.760 [2024-12-10 11:34:23.849291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:26:56.760 [2024-12-10 11:34:23.849320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.781 ms
00:26:56.760 [2024-12-10 11:34:23.849329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:57.021 [2024-12-10 11:34:23.883423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:57.021 [2024-12-10 11:34:23.883462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:26:57.021 [2024-12-10 11:34:23.883475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.110 ms
00:26:57.021 [2024-12-10 11:34:23.883484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:57.021 [2024-12-10 11:34:23.917263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:57.021 [2024-12-10 11:34:23.917299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:26:57.021 [2024-12-10 11:34:23.917312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.779 ms
00:26:57.021 [2024-12-10 11:34:23.917321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:57.021 [2024-12-10 11:34:23.950515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:57.021 [2024-12-10 11:34:23.950551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:26:57.021 [2024-12-10 11:34:23.950580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.156 ms
00:26:57.021 [2024-12-10 11:34:23.950590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:57.021 [2024-12-10 11:34:23.950626] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:26:57.021-00:26:57.023 [2024-12-10 11:34:23.950647 .. 11:34:23.951701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free
00:26:57.023 [2024-12-10 11:34:23.951719] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:26:57.023 [2024-12-10 11:34:23.951729] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d3773636-b766-4115-80d1-23bd7ec89892
00:26:57.023 [2024-12-10 11:34:23.951740] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:26:57.023 [2024-12-10 11:34:23.951750] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:26:57.023 [2024-12-10 11:34:23.951760] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:26:57.023 [2024-12-10 11:34:23.951770] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:26:57.023 [2024-12-10 11:34:23.951790] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:26:57.023 [2024-12-10 11:34:23.951801] ftl_debug.c:
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:57.023 [2024-12-10 11:34:23.951810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:57.023 [2024-12-10 11:34:23.951820] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:57.023 [2024-12-10 11:34:23.951828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:57.023 [2024-12-10 11:34:23.951838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.023 [2024-12-10 11:34:23.951848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:57.023 [2024-12-10 11:34:23.951858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms 00:26:57.023 [2024-12-10 11:34:23.951872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.023 [2024-12-10 11:34:23.971354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.023 [2024-12-10 11:34:23.971387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:57.023 [2024-12-10 11:34:23.971399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.453 ms 00:26:57.023 [2024-12-10 11:34:23.971408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.023 [2024-12-10 11:34:23.971985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.023 [2024-12-10 11:34:23.972003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:57.023 [2024-12-10 11:34:23.972020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:26:57.023 [2024-12-10 11:34:23.972031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.023 [2024-12-10 11:34:24.020065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.023 [2024-12-10 11:34:24.020101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:57.023 [2024-12-10 11:34:24.020130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.023 [2024-12-10 11:34:24.020141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.023 [2024-12-10 11:34:24.020191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.023 [2024-12-10 11:34:24.020202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:57.023 [2024-12-10 11:34:24.020218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.023 [2024-12-10 11:34:24.020228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.023 [2024-12-10 11:34:24.020290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.023 [2024-12-10 11:34:24.020303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:57.023 [2024-12-10 11:34:24.020313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.023 [2024-12-10 11:34:24.020323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.023 [2024-12-10 11:34:24.020339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.023 [2024-12-10 11:34:24.020349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:57.023 [2024-12-10 11:34:24.020359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.023 [2024-12-10 11:34:24.020372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
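(Editor's note: the statistics dump above is internally consistent. WAF, the write amplification factor, is the ratio of total media writes to user writes; with total writes: 960 against user writes: 0 the ratio is reported as inf — during this clean shutdown only metadata was persisted, no user data. A small worked example of that arithmetic follows; the waf helper is hypothetical, not an SPDK function.)

#include <math.h>
#include <stdio.h>

/* WAF = total media writes / user writes. With zero user writes the
 * ratio is infinite, which matches the "WAF: inf" line in the dump. */
static double waf(unsigned long total_writes, unsigned long user_writes)
{
	if (user_writes == 0)
		return INFINITY;
	return (double)total_writes / (double)user_writes;
}

int main(void)
{
	printf("WAF: %g\n", waf(960, 0));    /* prints "WAF: inf", as logged */
	printf("WAF: %g\n", waf(960, 480));  /* 2: half the media writes carried user data */
	return 0;
}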
00:26:57.283 [2024-12-10 11:34:24.134705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.283 [2024-12-10 11:34:24.134763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:57.283 [2024-12-10 11:34:24.134776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.283 [2024-12-10 11:34:24.134786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.283 [2024-12-10 11:34:24.228466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.283 [2024-12-10 11:34:24.228514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:57.283 [2024-12-10 11:34:24.228533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.283 [2024-12-10 11:34:24.228559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.283 [2024-12-10 11:34:24.228636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.283 [2024-12-10 11:34:24.228648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:57.283 [2024-12-10 11:34:24.228659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.283 [2024-12-10 11:34:24.228669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.283 [2024-12-10 11:34:24.228704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.283 [2024-12-10 11:34:24.228716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:57.283 [2024-12-10 11:34:24.228726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.283 [2024-12-10 11:34:24.228735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.283 [2024-12-10 11:34:24.228843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.283 [2024-12-10 11:34:24.228856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:57.283 [2024-12-10 11:34:24.228866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.284 [2024-12-10 11:34:24.228876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.284 [2024-12-10 11:34:24.228925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.284 [2024-12-10 11:34:24.228937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:57.284 [2024-12-10 11:34:24.228965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.284 [2024-12-10 11:34:24.228977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.284 [2024-12-10 11:34:24.229018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.284 [2024-12-10 11:34:24.229029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:57.284 [2024-12-10 11:34:24.229039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.284 [2024-12-10 11:34:24.229049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.284 [2024-12-10 11:34:24.229091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.284 [2024-12-10 11:34:24.229103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:57.284 [2024-12-10 11:34:24.229114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.284 [2024-12-10 11:34:24.229123] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.284 [2024-12-10 11:34:24.229243] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 490.201 ms, result 0 00:26:58.221 00:26:58.221 00:26:58.221 11:34:25 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:00.177 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:00.177 11:34:26 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:27:00.177 [2024-12-10 11:34:26.977871] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:27:00.177 [2024-12-10 11:34:26.978027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80369 ] 00:27:00.177 [2024-12-10 11:34:27.164314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.177 [2024-12-10 11:34:27.269619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.746 [2024-12-10 11:34:27.631762] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:00.747 [2024-12-10 11:34:27.631826] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:00.747 [2024-12-10 11:34:27.791147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.747 [2024-12-10 11:34:27.791197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:00.747 [2024-12-10 11:34:27.791213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:00.747 [2024-12-10 11:34:27.791224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.747 [2024-12-10 11:34:27.791270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.747 [2024-12-10 11:34:27.791285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:00.747 [2024-12-10 11:34:27.791295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:00.747 [2024-12-10 11:34:27.791305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.747 [2024-12-10 11:34:27.791326] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:00.747 [2024-12-10 11:34:27.792302] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:00.747 [2024-12-10 11:34:27.792334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.747 [2024-12-10 11:34:27.792345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:00.747 [2024-12-10 11:34:27.792356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:27:00.747 [2024-12-10 11:34:27.792366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.747 [2024-12-10 11:34:27.793803] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:00.747 [2024-12-10 11:34:27.812500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.747 [2024-12-10 11:34:27.812541] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Load super block 00:27:00.747 [2024-12-10 11:34:27.812555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.729 ms 00:27:00.747 [2024-12-10 11:34:27.812564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.747 [2024-12-10 11:34:27.812645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.747 [2024-12-10 11:34:27.812657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:00.747 [2024-12-10 11:34:27.812668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:27:00.747 [2024-12-10 11:34:27.812679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.747 [2024-12-10 11:34:27.819554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.747 [2024-12-10 11:34:27.819579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:00.747 [2024-12-10 11:34:27.819590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.816 ms 00:27:00.747 [2024-12-10 11:34:27.819603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.747 [2024-12-10 11:34:27.819691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.747 [2024-12-10 11:34:27.819704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:00.747 [2024-12-10 11:34:27.819715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:00.747 [2024-12-10 11:34:27.819724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.747 [2024-12-10 11:34:27.819762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.747 [2024-12-10 11:34:27.819774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:00.747 [2024-12-10 11:34:27.819784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:00.747 [2024-12-10 11:34:27.819793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.747 [2024-12-10 11:34:27.819819] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:00.747 [2024-12-10 11:34:27.824615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.747 [2024-12-10 11:34:27.824644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:00.747 [2024-12-10 11:34:27.824675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.808 ms 00:27:00.747 [2024-12-10 11:34:27.824685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.747 [2024-12-10 11:34:27.824717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.747 [2024-12-10 11:34:27.824727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:00.747 [2024-12-10 11:34:27.824737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:00.747 [2024-12-10 11:34:27.824747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.747 [2024-12-10 11:34:27.824799] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:00.747 [2024-12-10 11:34:27.824844] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:00.747 [2024-12-10 11:34:27.824883] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 
bytes 00:27:00.747 [2024-12-10 11:34:27.824903] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:00.747 [2024-12-10 11:34:27.825019] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:00.747 [2024-12-10 11:34:27.825033] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:00.747 [2024-12-10 11:34:27.825046] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:00.747 [2024-12-10 11:34:27.825060] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:00.747 [2024-12-10 11:34:27.825072] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:00.747 [2024-12-10 11:34:27.825083] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:00.747 [2024-12-10 11:34:27.825093] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:00.747 [2024-12-10 11:34:27.825107] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:00.747 [2024-12-10 11:34:27.825116] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:00.747 [2024-12-10 11:34:27.825127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.747 [2024-12-10 11:34:27.825137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:00.747 [2024-12-10 11:34:27.825147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:27:00.747 [2024-12-10 11:34:27.825157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.747 [2024-12-10 11:34:27.825229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.747 [2024-12-10 11:34:27.825240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:00.747 [2024-12-10 11:34:27.825250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:00.747 [2024-12-10 11:34:27.825260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.747 [2024-12-10 11:34:27.825348] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:00.747 [2024-12-10 11:34:27.825361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:00.747 [2024-12-10 11:34:27.825372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:00.747 [2024-12-10 11:34:27.825382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:00.747 [2024-12-10 11:34:27.825392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:00.747 [2024-12-10 11:34:27.825402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:00.747 [2024-12-10 11:34:27.825411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:00.747 [2024-12-10 11:34:27.825421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:00.747 [2024-12-10 11:34:27.825430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:00.747 [2024-12-10 11:34:27.825439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:00.747 [2024-12-10 11:34:27.825448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:00.747 [2024-12-10 11:34:27.825457] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:00.747 [2024-12-10 11:34:27.825475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:00.747 [2024-12-10 11:34:27.825494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:00.747 [2024-12-10 11:34:27.825504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:00.747 [2024-12-10 11:34:27.825513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:00.747 [2024-12-10 11:34:27.825522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:00.747 [2024-12-10 11:34:27.825531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:00.747 [2024-12-10 11:34:27.825540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:00.747 [2024-12-10 11:34:27.825550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:00.747 [2024-12-10 11:34:27.825559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:00.747 [2024-12-10 11:34:27.825568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:00.747 [2024-12-10 11:34:27.825577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:00.747 [2024-12-10 11:34:27.825586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:00.747 [2024-12-10 11:34:27.825594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:00.747 [2024-12-10 11:34:27.825603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:00.747 [2024-12-10 11:34:27.825613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:00.747 [2024-12-10 11:34:27.825621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:00.747 [2024-12-10 11:34:27.825630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:00.747 [2024-12-10 11:34:27.825639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:00.747 [2024-12-10 11:34:27.825648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:00.747 [2024-12-10 11:34:27.825657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:00.747 [2024-12-10 11:34:27.825667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:00.747 [2024-12-10 11:34:27.825675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:00.747 [2024-12-10 11:34:27.825684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:00.747 [2024-12-10 11:34:27.825693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:00.747 [2024-12-10 11:34:27.825702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:00.747 [2024-12-10 11:34:27.825711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:00.747 [2024-12-10 11:34:27.825719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:00.748 [2024-12-10 11:34:27.825729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:00.748 [2024-12-10 11:34:27.825738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:00.748 [2024-12-10 11:34:27.825747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:00.748 [2024-12-10 11:34:27.825756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
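(Editor's note: the region sizes printed in MiB above and the blk_offs/blk_sz hex values in the SB metadata layout dump below describe the same layout, related by the FTL block size. Assuming 4 KiB blocks — an assumption, though it is the only block size under which blk_sz:0x5000 matches the 80.00 MiB l2p region — the conversion is straightforward:)

#include <stdio.h>

/* Assumed 4 KiB FTL block size: 0x5000 blocks * 4 KiB = 80.00 MiB,
 * exactly the l2p region size shown in the MiB dump, so the two
 * layout tables are different views of the same numbers. */
#define FTL_BLOCK_SIZE 4096UL

static double blocks_to_mib(unsigned long blocks)
{
	return (double)(blocks * FTL_BLOCK_SIZE) / (1024.0 * 1024.0);
}

int main(void)
{
	printf("l2p     (0x5000 blocks): %.2f MiB\n", blocks_to_mib(0x5000)); /* 80.00 */
	printf("band_md (0x80 blocks):   %.2f MiB\n", blocks_to_mib(0x80));   /* 0.50  */
	printf("sb      (0x20 blocks):   %.2f MiB\n", blocks_to_mib(0x20));   /* 0.125, logged as 0.12 */
	return 0;
}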
00:27:00.748 [2024-12-10 11:34:27.825765] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:00.748 [2024-12-10 11:34:27.825775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:00.748 [2024-12-10 11:34:27.825784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:00.748 [2024-12-10 11:34:27.825793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:00.748 [2024-12-10 11:34:27.825803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:00.748 [2024-12-10 11:34:27.825812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:00.748 [2024-12-10 11:34:27.825821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:00.748 [2024-12-10 11:34:27.825830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:00.748 [2024-12-10 11:34:27.825839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:00.748 [2024-12-10 11:34:27.825848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:00.748 [2024-12-10 11:34:27.825858] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:00.748 [2024-12-10 11:34:27.825870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:00.748 [2024-12-10 11:34:27.825885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:00.748 [2024-12-10 11:34:27.825895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:00.748 [2024-12-10 11:34:27.825905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:00.748 [2024-12-10 11:34:27.825927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:00.748 [2024-12-10 11:34:27.825938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:00.748 [2024-12-10 11:34:27.825948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:00.748 [2024-12-10 11:34:27.825958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:00.748 [2024-12-10 11:34:27.825968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:00.748 [2024-12-10 11:34:27.825979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:00.748 [2024-12-10 11:34:27.825989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:00.748 [2024-12-10 11:34:27.825999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:00.748 [2024-12-10 11:34:27.826009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:00.748 
[2024-12-10 11:34:27.826019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:00.748 [2024-12-10 11:34:27.826030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:00.748 [2024-12-10 11:34:27.826039] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:00.748 [2024-12-10 11:34:27.826050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:00.748 [2024-12-10 11:34:27.826061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:00.748 [2024-12-10 11:34:27.826070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:00.748 [2024-12-10 11:34:27.826080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:00.748 [2024-12-10 11:34:27.826091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:00.748 [2024-12-10 11:34:27.826101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.748 [2024-12-10 11:34:27.826112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:00.748 [2024-12-10 11:34:27.826122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:27:00.748 [2024-12-10 11:34:27.826131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.008 [2024-12-10 11:34:27.865269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.008 [2024-12-10 11:34:27.865306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:01.008 [2024-12-10 11:34:27.865319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.156 ms 00:27:01.008 [2024-12-10 11:34:27.865332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.008 [2024-12-10 11:34:27.865420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.008 [2024-12-10 11:34:27.865431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:01.008 [2024-12-10 11:34:27.865442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:01.008 [2024-12-10 11:34:27.865451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.008 [2024-12-10 11:34:27.936665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.008 [2024-12-10 11:34:27.936701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:01.008 [2024-12-10 11:34:27.936714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.263 ms 00:27:01.008 [2024-12-10 11:34:27.936724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.008 [2024-12-10 11:34:27.936781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.008 [2024-12-10 11:34:27.936793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:01.008 [2024-12-10 11:34:27.936808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:01.008 
[2024-12-10 11:34:27.936819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.008 [2024-12-10 11:34:27.937358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.008 [2024-12-10 11:34:27.937380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:01.008 [2024-12-10 11:34:27.937391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:27:01.008 [2024-12-10 11:34:27.937401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.008 [2024-12-10 11:34:27.937543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.008 [2024-12-10 11:34:27.937557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:01.008 [2024-12-10 11:34:27.937572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:27:01.008 [2024-12-10 11:34:27.937582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.008 [2024-12-10 11:34:27.956209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.008 [2024-12-10 11:34:27.956245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:01.008 [2024-12-10 11:34:27.956258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.637 ms 00:27:01.008 [2024-12-10 11:34:27.956268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.008 [2024-12-10 11:34:27.974484] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:01.008 [2024-12-10 11:34:27.974520] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:01.008 [2024-12-10 11:34:27.974550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.008 [2024-12-10 11:34:27.974561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:01.008 [2024-12-10 11:34:27.974572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.195 ms 00:27:01.008 [2024-12-10 11:34:27.974581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.008 [2024-12-10 11:34:28.002591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.008 [2024-12-10 11:34:28.002630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:01.008 [2024-12-10 11:34:28.002643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.013 ms 00:27:01.008 [2024-12-10 11:34:28.002653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.008 [2024-12-10 11:34:28.020184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.008 [2024-12-10 11:34:28.020217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:01.008 [2024-12-10 11:34:28.020229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.490 ms 00:27:01.008 [2024-12-10 11:34:28.020238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.008 [2024-12-10 11:34:28.037435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.008 [2024-12-10 11:34:28.037473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:01.008 [2024-12-10 11:34:28.037485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.171 ms 00:27:01.008 [2024-12-10 11:34:28.037495] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:01.008 [2024-12-10 11:34:28.038253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.008 [2024-12-10 11:34:28.038285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:01.008 [2024-12-10 11:34:28.038300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:27:01.008 [2024-12-10 11:34:28.038310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.008 [2024-12-10 11:34:28.119187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.008 [2024-12-10 11:34:28.119263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:01.008 [2024-12-10 11:34:28.119284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.985 ms 00:27:01.008 [2024-12-10 11:34:28.119295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.267 [2024-12-10 11:34:28.129346] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:01.267 [2024-12-10 11:34:28.131668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.267 [2024-12-10 11:34:28.131698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:01.267 [2024-12-10 11:34:28.131711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.348 ms 00:27:01.267 [2024-12-10 11:34:28.131720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.267 [2024-12-10 11:34:28.131813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.267 [2024-12-10 11:34:28.131826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:01.267 [2024-12-10 11:34:28.131841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:01.267 [2024-12-10 11:34:28.131851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.267 [2024-12-10 11:34:28.131920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.267 [2024-12-10 11:34:28.131942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:01.267 [2024-12-10 11:34:28.131954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:01.267 [2024-12-10 11:34:28.131964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.267 [2024-12-10 11:34:28.131985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.267 [2024-12-10 11:34:28.131995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:01.267 [2024-12-10 11:34:28.132005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:01.267 [2024-12-10 11:34:28.132015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.267 [2024-12-10 11:34:28.132053] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:01.267 [2024-12-10 11:34:28.132065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.267 [2024-12-10 11:34:28.132075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:01.267 [2024-12-10 11:34:28.132084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:01.267 [2024-12-10 11:34:28.132094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.267 [2024-12-10 11:34:28.166524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action
00:27:01.267 [2024-12-10 11:34:28.166565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:27:01.267 [2024-12-10 11:34:28.166584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.451 ms
00:27:01.267 [2024-12-10 11:34:28.166594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:01.267 [2024-12-10 11:34:28.166679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:01.267 [2024-12-10 11:34:28.166692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:27:01.267 [2024-12-10 11:34:28.166702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms
00:27:01.267 [2024-12-10 11:34:28.166712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:01.267 [2024-12-10 11:34:28.167871] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 376.909 ms, result 0
00:27:02.205  [2024-12-10T11:35:13.276Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-10 11:35:12.936579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:46.162 [2024-12-10 11:35:12.936639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:27:46.162 [2024-12-10 11:35:12.936677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:27:46.162 [2024-12-10 11:35:12.936688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:46.162 [2024-12-10 11:35:12.938317] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:46.162 [2024-12-10 11:35:12.943210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:46.162 [2024-12-10 11:35:12.943250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:27:46.162 [2024-12-10 11:35:12.943279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.859 ms
00:27:46.162 [2024-12-10 11:35:12.943289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:46.162 [2024-12-10 11:35:12.954751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:46.162 [2024-12-10 11:35:12.954793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:27:46.162 [2024-12-10 11:35:12.954807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.663 ms
00:27:46.162 [2024-12-10 11:35:12.954839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:46.162 [2024-12-10 11:35:12.977481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:46.162 [2024-12-10 11:35:12.977534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:27:46.162 [2024-12-10 11:35:12.977547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.630 ms
00:27:46.162 [2024-12-10 11:35:12.977557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:46.162 [2024-12-10 11:35:12.982276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:46.162 [2024-12-10 11:35:12.982306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:27:46.162 [2024-12-10 11:35:12.982317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.694 ms
00:27:46.162 [2024-12-10 11:35:12.982347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:46.162 [2024-12-10 11:35:13.017359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:46.162 [2024-12-10 11:35:13.017400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:27:46.162 [2024-12-10 11:35:13.017412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.038 ms
00:27:46.162 [2024-12-10 11:35:13.017421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:46.162 [2024-12-10 11:35:13.037310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:46.162 [2024-12-10 11:35:13.037349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:27:46.162 [2024-12-10 11:35:13.037362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.862 ms
00:27:46.162 [2024-12-10 11:35:13.037372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:46.162 [2024-12-10 11:35:13.143506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0]
00:27:46.162 [2024-12-10 11:35:13.143560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:27:46.162 [2024-12-10 11:35:13.143574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.252 ms
00:27:46.162 [2024-12-10 11:35:13.143585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:46.162 [2024-12-10 11:35:13.179129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:46.162 [2024-12-10 11:35:13.179180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:27:46.162 [2024-12-10 11:35:13.179215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.582 ms
00:27:46.162 [2024-12-10 11:35:13.179229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:46.162 [2024-12-10 11:35:13.214302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:46.162 [2024-12-10 11:35:13.214339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:27:46.162 [2024-12-10 11:35:13.214352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.082 ms
00:27:46.162 [2024-12-10 11:35:13.214361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:46.162 [2024-12-10 11:35:13.248219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:46.162 [2024-12-10 11:35:13.248259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:27:46.162 [2024-12-10 11:35:13.248287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.860 ms
00:27:46.162 [2024-12-10 11:35:13.248296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:46.423 [2024-12-10 11:35:13.281209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:46.423 [2024-12-10 11:35:13.281248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:27:46.423 [2024-12-10 11:35:13.281260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.894 ms
00:27:46.423 [2024-12-10 11:35:13.281269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:46.423 [2024-12-10 11:35:13.281320] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:46.423 [2024-12-10 11:35:13.281334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 92928 / 261120 wr_cnt: 1 state: open
00:27:46.423 [2024-12-10 11:35:13.281347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 - Band 100: 0 / 261120 wr_cnt: 0 state: free
00:27:46.424 [2024-12-10 11:35:13.282397] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:46.424 [2024-12-10 11:35:13.282406] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d3773636-b766-4115-80d1-23bd7ec89892
00:27:46.424 [2024-12-10 11:35:13.282417] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 92928
00:27:46.424 [2024-12-10 11:35:13.282427] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 93888
00:27:46.424 [2024-12-10 11:35:13.282436] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 92928
00:27:46.424 [2024-12-10 11:35:13.282447] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0103
00:27:46.424 [2024-12-10 11:35:13.282472] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:46.424 [2024-12-10 11:35:13.282482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:27:46.424 [2024-12-10 11:35:13.282491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:27:46.424 [2024-12-10 11:35:13.282500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:27:46.424 [2024-12-10 11:35:13.282509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:46.424 [2024-12-10 11:35:13.282519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.424 [2024-12-10 11:35:13.282528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:46.424 [2024-12-10 11:35:13.282538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.202 ms 00:27:46.424 [2024-12-10 11:35:13.282549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.424 [2024-12-10 11:35:13.301987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.424 [2024-12-10 11:35:13.302020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:46.424 [2024-12-10 11:35:13.302053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.435 ms 00:27:46.424 [2024-12-10 11:35:13.302063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.424 [2024-12-10 11:35:13.302605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.424 [2024-12-10 11:35:13.302622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:46.424 [2024-12-10 11:35:13.302632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:27:46.424 [2024-12-10 11:35:13.302642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.424 [2024-12-10 11:35:13.349675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.424 [2024-12-10 11:35:13.349709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:46.424 [2024-12-10 11:35:13.349737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.424 [2024-12-10 11:35:13.349747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.424 [2024-12-10 11:35:13.349795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.424 [2024-12-10 11:35:13.349805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:46.424 [2024-12-10 11:35:13.349815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.424 [2024-12-10 11:35:13.349824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.424 [2024-12-10 11:35:13.349900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.424 [2024-12-10 11:35:13.349916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:46.424 [2024-12-10 11:35:13.349926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.424 [2024-12-10 11:35:13.349950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.424 [2024-12-10 11:35:13.349966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.424 [2024-12-10 11:35:13.349976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:46.424 [2024-12-10 11:35:13.349986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.424 [2024-12-10 11:35:13.349995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.424 [2024-12-10 11:35:13.466344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.424 [2024-12-10 11:35:13.466400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:46.424 [2024-12-10 11:35:13.466429] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.424 [2024-12-10 11:35:13.466440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.684 [2024-12-10 11:35:13.560560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.684 [2024-12-10 11:35:13.560608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:46.684 [2024-12-10 11:35:13.560622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.684 [2024-12-10 11:35:13.560632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.684 [2024-12-10 11:35:13.560728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.684 [2024-12-10 11:35:13.560739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:46.684 [2024-12-10 11:35:13.560750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.684 [2024-12-10 11:35:13.560765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.684 [2024-12-10 11:35:13.560802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.684 [2024-12-10 11:35:13.560821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:46.684 [2024-12-10 11:35:13.560830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.684 [2024-12-10 11:35:13.560840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.684 [2024-12-10 11:35:13.560961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.684 [2024-12-10 11:35:13.560975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:46.684 [2024-12-10 11:35:13.560985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.684 [2024-12-10 11:35:13.560999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.684 [2024-12-10 11:35:13.561051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.684 [2024-12-10 11:35:13.561064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:46.684 [2024-12-10 11:35:13.561074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.684 [2024-12-10 11:35:13.561084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.684 [2024-12-10 11:35:13.561121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.684 [2024-12-10 11:35:13.561132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:46.684 [2024-12-10 11:35:13.561142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.684 [2024-12-10 11:35:13.561152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.684 [2024-12-10 11:35:13.561197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.684 [2024-12-10 11:35:13.561208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:46.684 [2024-12-10 11:35:13.561218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.684 [2024-12-10 11:35:13.561228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.684 [2024-12-10 11:35:13.561356] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 628.138 ms, result 0 00:27:48.064 00:27:48.064 
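For reference, the WAF value reported in the ftl_dev_dump_stats dump above is simply total writes divided by user writes: 93888 / 92928 = 1.0103. A minimal standalone C sketch (not SPDK code; both counters are copied from the dump) that reproduces the figure:

    #include <stdio.h>

    /* Write amplification factor as dumped by ftl_dev_dump_stats above:
     * WAF = total writes (all media writes, including FTL metadata)
     *       / user writes (writes issued on behalf of the user). */
    int main(void)
    {
        const double total_writes = 93888.0; /* "total writes" from the dump */
        const double user_writes = 92928.0;  /* "user writes" from the dump */

        printf("WAF: %.4f\n", total_writes / user_writes); /* prints WAF: 1.0103 */
        return 0;
    }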
00:27:48.064 11:35:15 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:27:48.322 [2024-12-10 11:35:15.211175] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:27:48.322 [2024-12-10 11:35:15.211324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80849 ] 00:27:48.322 [2024-12-10 11:35:15.392288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.581 [2024-12-10 11:35:15.495334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.841 [2024-12-10 11:35:15.845060] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:48.841 [2024-12-10 11:35:15.845127] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:49.101 [2024-12-10 11:35:16.004683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.101 [2024-12-10 11:35:16.004736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:49.101 [2024-12-10 11:35:16.004751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:49.101 [2024-12-10 11:35:16.004777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.101 [2024-12-10 11:35:16.004823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.101 [2024-12-10 11:35:16.004838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:49.101 [2024-12-10 11:35:16.004848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:49.101 [2024-12-10 11:35:16.004858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.101 [2024-12-10 11:35:16.004879] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:49.101 [2024-12-10 11:35:16.005816] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:49.101 [2024-12-10 11:35:16.005846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.101 [2024-12-10 11:35:16.005858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:49.101 [2024-12-10 11:35:16.005869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:27:49.101 [2024-12-10 11:35:16.005879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.101 [2024-12-10 11:35:16.007402] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:49.101 [2024-12-10 11:35:16.025361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.101 [2024-12-10 11:35:16.025399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:49.101 [2024-12-10 11:35:16.025412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.988 ms 00:27:49.101 [2024-12-10 11:35:16.025423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.101 [2024-12-10 11:35:16.025494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.102 [2024-12-10 11:35:16.025507] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:49.102 [2024-12-10 11:35:16.025517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:49.102 [2024-12-10 11:35:16.025526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.102 [2024-12-10 11:35:16.032318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.102 [2024-12-10 11:35:16.032348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:49.102 [2024-12-10 11:35:16.032359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.734 ms 00:27:49.102 [2024-12-10 11:35:16.032373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.102 [2024-12-10 11:35:16.032442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.102 [2024-12-10 11:35:16.032453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:49.102 [2024-12-10 11:35:16.032463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:49.102 [2024-12-10 11:35:16.032473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.102 [2024-12-10 11:35:16.032509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.102 [2024-12-10 11:35:16.032521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:49.102 [2024-12-10 11:35:16.032531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:49.102 [2024-12-10 11:35:16.032540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.102 [2024-12-10 11:35:16.032566] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:49.102 [2024-12-10 11:35:16.037246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.102 [2024-12-10 11:35:16.037277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:49.102 [2024-12-10 11:35:16.037291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.692 ms 00:27:49.102 [2024-12-10 11:35:16.037317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.102 [2024-12-10 11:35:16.037355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.102 [2024-12-10 11:35:16.037367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:49.102 [2024-12-10 11:35:16.037377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:49.102 [2024-12-10 11:35:16.037387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.102 [2024-12-10 11:35:16.037438] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:49.102 [2024-12-10 11:35:16.037463] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:49.102 [2024-12-10 11:35:16.037507] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:49.102 [2024-12-10 11:35:16.037528] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:49.102 [2024-12-10 11:35:16.037615] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:49.102 [2024-12-10 11:35:16.037628] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] base layout blob store 0x48 bytes 00:27:49.102 [2024-12-10 11:35:16.037641] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:49.102 [2024-12-10 11:35:16.037654] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:49.102 [2024-12-10 11:35:16.037665] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:49.102 [2024-12-10 11:35:16.037676] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:49.102 [2024-12-10 11:35:16.037686] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:49.102 [2024-12-10 11:35:16.037699] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:49.102 [2024-12-10 11:35:16.037709] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:49.102 [2024-12-10 11:35:16.037719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.102 [2024-12-10 11:35:16.037729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:49.102 [2024-12-10 11:35:16.037738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:27:49.102 [2024-12-10 11:35:16.037748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.102 [2024-12-10 11:35:16.037817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.102 [2024-12-10 11:35:16.037828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:49.102 [2024-12-10 11:35:16.037838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:49.102 [2024-12-10 11:35:16.037847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.102 [2024-12-10 11:35:16.037947] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:49.102 [2024-12-10 11:35:16.037962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:49.102 [2024-12-10 11:35:16.037973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:49.102 [2024-12-10 11:35:16.037983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.102 [2024-12-10 11:35:16.037993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:49.102 [2024-12-10 11:35:16.038002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:49.102 [2024-12-10 11:35:16.038012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:49.102 [2024-12-10 11:35:16.038022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:49.102 [2024-12-10 11:35:16.038031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:49.102 [2024-12-10 11:35:16.038041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:49.102 [2024-12-10 11:35:16.038051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:49.102 [2024-12-10 11:35:16.038060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:49.102 [2024-12-10 11:35:16.038070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:49.102 [2024-12-10 11:35:16.038090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:49.102 [2024-12-10 11:35:16.038099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:49.102 
[2024-12-10 11:35:16.038108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.102 [2024-12-10 11:35:16.038118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:49.102 [2024-12-10 11:35:16.038127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:49.102 [2024-12-10 11:35:16.038137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.102 [2024-12-10 11:35:16.038146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:49.102 [2024-12-10 11:35:16.038155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:49.102 [2024-12-10 11:35:16.038164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:49.102 [2024-12-10 11:35:16.038173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:49.102 [2024-12-10 11:35:16.038182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:49.102 [2024-12-10 11:35:16.038190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:49.102 [2024-12-10 11:35:16.038199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:49.102 [2024-12-10 11:35:16.038208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:49.102 [2024-12-10 11:35:16.038216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:49.102 [2024-12-10 11:35:16.038225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:49.102 [2024-12-10 11:35:16.038234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:49.102 [2024-12-10 11:35:16.038242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:49.102 [2024-12-10 11:35:16.038251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:49.102 [2024-12-10 11:35:16.038259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:49.102 [2024-12-10 11:35:16.038268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:49.102 [2024-12-10 11:35:16.038276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:49.102 [2024-12-10 11:35:16.038285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:49.102 [2024-12-10 11:35:16.038294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:49.102 [2024-12-10 11:35:16.038303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:49.102 [2024-12-10 11:35:16.038311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:49.102 [2024-12-10 11:35:16.038320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.102 [2024-12-10 11:35:16.038329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:49.102 [2024-12-10 11:35:16.038338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:49.102 [2024-12-10 11:35:16.038347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.102 [2024-12-10 11:35:16.038355] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:49.102 [2024-12-10 11:35:16.038365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:49.102 [2024-12-10 11:35:16.038375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:49.102 [2024-12-10 11:35:16.038384] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.102 [2024-12-10 11:35:16.038393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:49.102 [2024-12-10 11:35:16.038402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:49.102 [2024-12-10 11:35:16.038411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:49.102 [2024-12-10 11:35:16.038420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:49.102 [2024-12-10 11:35:16.038428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:49.102 [2024-12-10 11:35:16.038437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:49.102 [2024-12-10 11:35:16.038448] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:49.102 [2024-12-10 11:35:16.038459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:49.102 [2024-12-10 11:35:16.038474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:49.102 [2024-12-10 11:35:16.038483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:49.102 [2024-12-10 11:35:16.038493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:49.103 [2024-12-10 11:35:16.038502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:49.103 [2024-12-10 11:35:16.038512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:49.103 [2024-12-10 11:35:16.038523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:49.103 [2024-12-10 11:35:16.038533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:49.103 [2024-12-10 11:35:16.038544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:49.103 [2024-12-10 11:35:16.038554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:49.103 [2024-12-10 11:35:16.038564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:49.103 [2024-12-10 11:35:16.038575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:49.103 [2024-12-10 11:35:16.038586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:49.103 [2024-12-10 11:35:16.038596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:49.103 [2024-12-10 11:35:16.038607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:49.103 [2024-12-10 11:35:16.038618] upgrade/ftl_sb_v5.c: 
422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:49.103 [2024-12-10 11:35:16.038629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:49.103 [2024-12-10 11:35:16.038640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:49.103 [2024-12-10 11:35:16.038650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:49.103 [2024-12-10 11:35:16.038660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:49.103 [2024-12-10 11:35:16.038670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:49.103 [2024-12-10 11:35:16.038681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.103 [2024-12-10 11:35:16.038691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:49.103 [2024-12-10 11:35:16.038701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.803 ms 00:27:49.103 [2024-12-10 11:35:16.038710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.103 [2024-12-10 11:35:16.079561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.103 [2024-12-10 11:35:16.079596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:49.103 [2024-12-10 11:35:16.079610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.873 ms 00:27:49.103 [2024-12-10 11:35:16.079625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.103 [2024-12-10 11:35:16.079700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.103 [2024-12-10 11:35:16.079710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:49.103 [2024-12-10 11:35:16.079721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:27:49.103 [2024-12-10 11:35:16.079731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.103 [2024-12-10 11:35:16.149102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.103 [2024-12-10 11:35:16.149139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:49.103 [2024-12-10 11:35:16.149154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.422 ms 00:27:49.103 [2024-12-10 11:35:16.149164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.103 [2024-12-10 11:35:16.149208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.103 [2024-12-10 11:35:16.149219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:49.103 [2024-12-10 11:35:16.149234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:49.103 [2024-12-10 11:35:16.149244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.103 [2024-12-10 11:35:16.149777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.103 [2024-12-10 11:35:16.149801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:49.103 [2024-12-10 11:35:16.149812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.464 ms 00:27:49.103 [2024-12-10 11:35:16.149822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.103 [2024-12-10 11:35:16.150057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.103 [2024-12-10 11:35:16.150083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:49.103 [2024-12-10 11:35:16.150098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:27:49.103 [2024-12-10 11:35:16.150109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.103 [2024-12-10 11:35:16.167864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.103 [2024-12-10 11:35:16.167901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:49.103 [2024-12-10 11:35:16.167944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.762 ms 00:27:49.103 [2024-12-10 11:35:16.167955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.103 [2024-12-10 11:35:16.186758] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:49.103 [2024-12-10 11:35:16.186798] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:49.103 [2024-12-10 11:35:16.186813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.103 [2024-12-10 11:35:16.186823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:49.103 [2024-12-10 11:35:16.186834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.782 ms 00:27:49.103 [2024-12-10 11:35:16.186843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.363 [2024-12-10 11:35:16.214731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.363 [2024-12-10 11:35:16.214772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:49.363 [2024-12-10 11:35:16.214786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.890 ms 00:27:49.363 [2024-12-10 11:35:16.214796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.363 [2024-12-10 11:35:16.232019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.363 [2024-12-10 11:35:16.232054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:49.363 [2024-12-10 11:35:16.232067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.204 ms 00:27:49.363 [2024-12-10 11:35:16.232092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.363 [2024-12-10 11:35:16.249283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.363 [2024-12-10 11:35:16.249318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:49.363 [2024-12-10 11:35:16.249330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.181 ms 00:27:49.363 [2024-12-10 11:35:16.249340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.363 [2024-12-10 11:35:16.250108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.363 [2024-12-10 11:35:16.250139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:49.363 [2024-12-10 11:35:16.250154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:27:49.363 [2024-12-10 
11:35:16.250164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.363 [2024-12-10 11:35:16.335190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.363 [2024-12-10 11:35:16.335243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:49.363 [2024-12-10 11:35:16.335264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.141 ms 00:27:49.363 [2024-12-10 11:35:16.335274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.363 [2024-12-10 11:35:16.345578] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:49.363 [2024-12-10 11:35:16.348057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.363 [2024-12-10 11:35:16.348087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:49.363 [2024-12-10 11:35:16.348100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.760 ms 00:27:49.363 [2024-12-10 11:35:16.348109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.363 [2024-12-10 11:35:16.348188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.363 [2024-12-10 11:35:16.348201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:49.363 [2024-12-10 11:35:16.348215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:49.363 [2024-12-10 11:35:16.348225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.363 [2024-12-10 11:35:16.349627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.363 [2024-12-10 11:35:16.349670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:49.363 [2024-12-10 11:35:16.349682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.363 ms 00:27:49.363 [2024-12-10 11:35:16.349692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.363 [2024-12-10 11:35:16.349719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.363 [2024-12-10 11:35:16.349730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:49.363 [2024-12-10 11:35:16.349741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:49.363 [2024-12-10 11:35:16.349751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.363 [2024-12-10 11:35:16.349814] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:49.363 [2024-12-10 11:35:16.349828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.363 [2024-12-10 11:35:16.349838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:49.363 [2024-12-10 11:35:16.349848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:49.363 [2024-12-10 11:35:16.349859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.363 [2024-12-10 11:35:16.384156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.363 [2024-12-10 11:35:16.384194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:49.363 [2024-12-10 11:35:16.384213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.333 ms 00:27:49.363 [2024-12-10 11:35:16.384222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.363 [2024-12-10 11:35:16.384288] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:49.363 [2024-12-10 11:35:16.384299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:27:49.363 [2024-12-10 11:35:16.384310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:27:49.363 [2024-12-10 11:35:16.384320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:49.363 [2024-12-10 11:35:16.385393] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 380.879 ms, result 0
00:27:50.771  [2024-12-10T11:35:59.954Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-10 11:35:59.758061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:32.840 [2024-12-10 11:35:59.758129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:32.840 [2024-12-10 11:35:59.758152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:28:32.840 [2024-12-10
11:35:59.758163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.840 [2024-12-10 11:35:59.758188] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:32.840 [2024-12-10 11:35:59.762655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.840 [2024-12-10 11:35:59.762702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:32.840 [2024-12-10 11:35:59.762715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.456 ms 00:28:32.840 [2024-12-10 11:35:59.762726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.840 [2024-12-10 11:35:59.762938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.840 [2024-12-10 11:35:59.762951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:32.840 [2024-12-10 11:35:59.762962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:28:32.840 [2024-12-10 11:35:59.762977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.840 [2024-12-10 11:35:59.768854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.840 [2024-12-10 11:35:59.768893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:32.840 [2024-12-10 11:35:59.768907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.868 ms 00:28:32.840 [2024-12-10 11:35:59.768926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.840 [2024-12-10 11:35:59.774106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.840 [2024-12-10 11:35:59.774140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:32.840 [2024-12-10 11:35:59.774152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.149 ms 00:28:32.840 [2024-12-10 11:35:59.774184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.840 [2024-12-10 11:35:59.810189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.840 [2024-12-10 11:35:59.810226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:32.840 [2024-12-10 11:35:59.810240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.025 ms 00:28:32.840 [2024-12-10 11:35:59.810250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.840 [2024-12-10 11:35:59.830884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.840 [2024-12-10 11:35:59.830926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:32.840 [2024-12-10 11:35:59.830940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.629 ms 00:28:32.840 [2024-12-10 11:35:59.830950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.101 [2024-12-10 11:35:59.986498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.101 [2024-12-10 11:35:59.986549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:33.101 [2024-12-10 11:35:59.986563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 155.758 ms 00:28:33.101 [2024-12-10 11:35:59.986573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.101 [2024-12-10 11:36:00.022719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.101 [2024-12-10 11:36:00.022755] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:33.101 [2024-12-10 11:36:00.022769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.187 ms 00:28:33.101 [2024-12-10 11:36:00.022778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.101 [2024-12-10 11:36:00.058007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.101 [2024-12-10 11:36:00.058042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:33.101 [2024-12-10 11:36:00.058054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.249 ms 00:28:33.101 [2024-12-10 11:36:00.058063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.101 [2024-12-10 11:36:00.092467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.101 [2024-12-10 11:36:00.092499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:33.101 [2024-12-10 11:36:00.092512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.423 ms 00:28:33.101 [2024-12-10 11:36:00.092521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.101 [2024-12-10 11:36:00.126928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.101 [2024-12-10 11:36:00.126961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:33.101 [2024-12-10 11:36:00.126973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.389 ms 00:28:33.101 [2024-12-10 11:36:00.126982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.101 [2024-12-10 11:36:00.127017] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:33.101 [2024-12-10 11:36:00.127034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:28:33.101 [2024-12-10 11:36:00.127047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 ... Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:33.102 [2024-12-10 11:36:00.128089] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:33.102 [2024-12-10 11:36:00.128098] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d3773636-b766-4115-80d1-23bd7ec89892 00:28:33.102 [2024-12-10 11:36:00.128109] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:28:33.102 [2024-12-10 11:36:00.128119] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 39104 00:28:33.102 [2024-12-10 11:36:00.128128] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 38144 00:28:33.102 [2024-12-10 11:36:00.128139] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0252 00:28:33.102 [2024-12-10 11:36:00.128154] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:33.102 [2024-12-10 11:36:00.128174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:33.102 [2024-12-10 11:36:00.128183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:33.102 [2024-12-10 11:36:00.128193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:33.102 [2024-12-10 11:36:00.128201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:33.102 [2024-12-10 11:36:00.128216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.102 [2024-12-10 11:36:00.128226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:33.102 [2024-12-10 11:36:00.128236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.202 ms 00:28:33.102 [2024-12-10 11:36:00.128245] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.102 [2024-12-10 11:36:00.147797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.102 [2024-12-10 11:36:00.147828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:33.102 [2024-12-10 11:36:00.147847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.550 ms 00:28:33.102 [2024-12-10 11:36:00.147856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.102 [2024-12-10 11:36:00.148410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.102 [2024-12-10 11:36:00.148427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:33.102 [2024-12-10 11:36:00.148438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:28:33.102 [2024-12-10 11:36:00.148448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.102 [2024-12-10 11:36:00.197760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.102 [2024-12-10 11:36:00.197796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:33.102 [2024-12-10 11:36:00.197809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.102 [2024-12-10 11:36:00.197819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.102 [2024-12-10 11:36:00.197876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.102 [2024-12-10 11:36:00.197887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:33.103 [2024-12-10 11:36:00.197896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.103 [2024-12-10 11:36:00.197906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.103 [2024-12-10 11:36:00.197990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.103 [2024-12-10 11:36:00.198007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:33.103 [2024-12-10 11:36:00.198022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.103 [2024-12-10 11:36:00.198032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.103 [2024-12-10 11:36:00.198048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.103 [2024-12-10 11:36:00.198059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:33.103 [2024-12-10 11:36:00.198068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.103 [2024-12-10 11:36:00.198077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.363 [2024-12-10 11:36:00.315804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.363 [2024-12-10 11:36:00.315885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:33.363 [2024-12-10 11:36:00.315900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.363 [2024-12-10 11:36:00.315910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.363 [2024-12-10 11:36:00.408594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.363 [2024-12-10 11:36:00.408646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:33.363 [2024-12-10 11:36:00.408676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:28:33.363 [2024-12-10 11:36:00.408686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.363 [2024-12-10 11:36:00.408775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.363 [2024-12-10 11:36:00.408787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:33.363 [2024-12-10 11:36:00.408797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.363 [2024-12-10 11:36:00.408811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.363 [2024-12-10 11:36:00.408847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.363 [2024-12-10 11:36:00.408857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:33.363 [2024-12-10 11:36:00.408867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.363 [2024-12-10 11:36:00.408876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.363 [2024-12-10 11:36:00.409001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.363 [2024-12-10 11:36:00.409031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:33.363 [2024-12-10 11:36:00.409041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.363 [2024-12-10 11:36:00.409051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.363 [2024-12-10 11:36:00.409090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.363 [2024-12-10 11:36:00.409102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:33.363 [2024-12-10 11:36:00.409112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.363 [2024-12-10 11:36:00.409122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.363 [2024-12-10 11:36:00.409159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.363 [2024-12-10 11:36:00.409170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:33.363 [2024-12-10 11:36:00.409180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.363 [2024-12-10 11:36:00.409189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.363 [2024-12-10 11:36:00.409234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.363 [2024-12-10 11:36:00.409246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:33.363 [2024-12-10 11:36:00.409256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.363 [2024-12-10 11:36:00.409266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.363 [2024-12-10 11:36:00.409414] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 652.357 ms, result 0 00:28:34.300 00:28:34.300 00:28:34.300 11:36:01 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:36.205 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:36.205 11:36:03 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:36.205 11:36:03 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:28:36.205 11:36:03 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:36.205 11:36:03 
ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:36.205 11:36:03 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:36.205 Process with pid 79168 is not found 00:28:36.205 Remove shared memory files 00:28:36.205 11:36:03 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79168 00:28:36.205 11:36:03 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79168 ']' 00:28:36.205 11:36:03 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79168 00:28:36.205 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79168) - No such process 00:28:36.205 11:36:03 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79168 is not found' 00:28:36.205 11:36:03 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:28:36.205 11:36:03 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:36.205 11:36:03 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:28:36.205 11:36:03 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:28:36.205 11:36:03 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:28:36.205 11:36:03 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:36.205 11:36:03 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:28:36.205 ************************************ 00:28:36.205 END TEST ftl_restore 00:28:36.205 ************************************ 00:28:36.205 00:28:36.205 real 3m30.831s 00:28:36.205 user 3m17.514s 00:28:36.205 sys 0m13.678s 00:28:36.205 11:36:03 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.206 11:36:03 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:28:36.206 11:36:03 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:36.206 11:36:03 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:36.206 11:36:03 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:36.206 11:36:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:36.206 ************************************ 00:28:36.206 START TEST ftl_dirty_shutdown 00:28:36.206 ************************************ 00:28:36.206 11:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:36.466 * Looking for test storage... 
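
The restore test above passes on the "md5sum -c" line: the checksum recorded before the FTL device was torn down must match the data read back after restore. The earlier statistics dump is also self-consistent: WAF is presumably total writes divided by user writes, and 39104 / 38144 ≈ 1.0252, exactly the value ftl_debug.c reported. A minimal sketch of the checksum pattern follows (paths taken from this run; the shutdown and restore steps in between are elided, and this is an illustration of the pattern, not the test script itself):

    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    md5sum "$testfile" > "$testfile.md5"   # record checksum while the data is known good
    # ... FTL device is shut down and restored here ...
    md5sum -c "$testfile.md5"              # exits non-zero if the restored data differs
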
00:28:36.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:36.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.466 --rc genhtml_branch_coverage=1 00:28:36.466 --rc genhtml_function_coverage=1 00:28:36.466 --rc genhtml_legend=1 00:28:36.466 --rc geninfo_all_blocks=1 00:28:36.466 --rc geninfo_unexecuted_blocks=1 00:28:36.466 00:28:36.466 ' 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:36.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.466 --rc genhtml_branch_coverage=1 00:28:36.466 --rc genhtml_function_coverage=1 00:28:36.466 --rc genhtml_legend=1 00:28:36.466 --rc geninfo_all_blocks=1 00:28:36.466 --rc geninfo_unexecuted_blocks=1 00:28:36.466 00:28:36.466 ' 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:36.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.466 --rc genhtml_branch_coverage=1 00:28:36.466 --rc genhtml_function_coverage=1 00:28:36.466 --rc genhtml_legend=1 00:28:36.466 --rc geninfo_all_blocks=1 00:28:36.466 --rc geninfo_unexecuted_blocks=1 00:28:36.466 00:28:36.466 ' 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:36.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.466 --rc genhtml_branch_coverage=1 00:28:36.466 --rc genhtml_function_coverage=1 00:28:36.466 --rc genhtml_legend=1 00:28:36.466 --rc geninfo_all_blocks=1 00:28:36.466 --rc geninfo_unexecuted_blocks=1 00:28:36.466 00:28:36.466 ' 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:28:36.466 11:36:03 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81407 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81407 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81407 ']' 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.466 11:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:36.726 [2024-12-10 11:36:03.699805] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
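
For orientation, the rpc.py calls traced below assemble the bdev stack the dirty-shutdown test exercises. Condensed into one hedged sketch, with addresses, sizes, and UUIDs as observed in this run (clearing of leftover lvstores is omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe device
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on the base bdev
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 4f4b279e-b1f5-4354-b1ba-30fe383e8b5c   # thin 103424 MiB lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache NVMe device
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB cache partition nvc0n1p0
    # -d takes the lvol bdev name returned by bdev_lvol_create above
    $rpc -t 240 bdev_ftl_create -b ftl0 -d 49345b66-57df-42e6-a913-bcce5a134103 --l2p_dram_limit 10 -c nvc0n1p0
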
00:28:36.726 [2024-12-10 11:36:03.699947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81407 ] 00:28:36.985 [2024-12-10 11:36:03.885718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.985 [2024-12-10 11:36:03.993088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.923 11:36:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.923 11:36:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:37.923 11:36:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:37.923 11:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:28:37.923 11:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:37.923 11:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:28:37.923 11:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:37.923 11:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:38.185 11:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:38.185 11:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:38.185 11:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:38.185 11:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:38.185 11:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:38.185 11:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:38.185 11:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:38.185 11:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:38.445 11:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:38.445 { 00:28:38.445 "name": "nvme0n1", 00:28:38.445 "aliases": [ 00:28:38.445 "2c21af74-15e8-4f17-a13c-d8f1867ecd76" 00:28:38.445 ], 00:28:38.445 "product_name": "NVMe disk", 00:28:38.445 "block_size": 4096, 00:28:38.445 "num_blocks": 1310720, 00:28:38.445 "uuid": "2c21af74-15e8-4f17-a13c-d8f1867ecd76", 00:28:38.445 "numa_id": -1, 00:28:38.445 "assigned_rate_limits": { 00:28:38.445 "rw_ios_per_sec": 0, 00:28:38.445 "rw_mbytes_per_sec": 0, 00:28:38.445 "r_mbytes_per_sec": 0, 00:28:38.445 "w_mbytes_per_sec": 0 00:28:38.445 }, 00:28:38.445 "claimed": true, 00:28:38.445 "claim_type": "read_many_write_one", 00:28:38.445 "zoned": false, 00:28:38.445 "supported_io_types": { 00:28:38.445 "read": true, 00:28:38.445 "write": true, 00:28:38.445 "unmap": true, 00:28:38.445 "flush": true, 00:28:38.445 "reset": true, 00:28:38.445 "nvme_admin": true, 00:28:38.445 "nvme_io": true, 00:28:38.445 "nvme_io_md": false, 00:28:38.445 "write_zeroes": true, 00:28:38.445 "zcopy": false, 00:28:38.445 "get_zone_info": false, 00:28:38.445 "zone_management": false, 00:28:38.445 "zone_append": false, 00:28:38.445 "compare": true, 00:28:38.445 "compare_and_write": false, 00:28:38.445 "abort": true, 00:28:38.445 "seek_hole": false, 00:28:38.445 "seek_data": false, 00:28:38.445 
"copy": true, 00:28:38.445 "nvme_iov_md": false 00:28:38.445 }, 00:28:38.445 "driver_specific": { 00:28:38.445 "nvme": [ 00:28:38.445 { 00:28:38.445 "pci_address": "0000:00:11.0", 00:28:38.445 "trid": { 00:28:38.445 "trtype": "PCIe", 00:28:38.445 "traddr": "0000:00:11.0" 00:28:38.445 }, 00:28:38.445 "ctrlr_data": { 00:28:38.445 "cntlid": 0, 00:28:38.445 "vendor_id": "0x1b36", 00:28:38.445 "model_number": "QEMU NVMe Ctrl", 00:28:38.445 "serial_number": "12341", 00:28:38.445 "firmware_revision": "8.0.0", 00:28:38.445 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:38.445 "oacs": { 00:28:38.445 "security": 0, 00:28:38.445 "format": 1, 00:28:38.445 "firmware": 0, 00:28:38.445 "ns_manage": 1 00:28:38.445 }, 00:28:38.445 "multi_ctrlr": false, 00:28:38.445 "ana_reporting": false 00:28:38.445 }, 00:28:38.445 "vs": { 00:28:38.445 "nvme_version": "1.4" 00:28:38.445 }, 00:28:38.445 "ns_data": { 00:28:38.445 "id": 1, 00:28:38.445 "can_share": false 00:28:38.445 } 00:28:38.445 } 00:28:38.445 ], 00:28:38.445 "mp_policy": "active_passive" 00:28:38.445 } 00:28:38.445 } 00:28:38.445 ]' 00:28:38.445 11:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:38.445 11:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:38.445 11:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:38.445 11:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:38.445 11:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:38.445 11:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:28:38.445 11:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:38.445 11:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:38.445 11:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:38.445 11:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:38.445 11:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:38.704 11:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=1cd90635-26e9-4370-9434-52315d0c6900 00:28:38.704 11:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:38.704 11:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1cd90635-26e9-4370-9434-52315d0c6900 00:28:38.961 11:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:39.220 11:36:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=4f4b279e-b1f5-4354-b1ba-30fe383e8b5c 00:28:39.220 11:36:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4f4b279e-b1f5-4354-b1ba-30fe383e8b5c 00:28:39.220 11:36:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=49345b66-57df-42e6-a913-bcce5a134103 00:28:39.220 11:36:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:28:39.220 11:36:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 49345b66-57df-42e6-a913-bcce5a134103 00:28:39.220 11:36:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:28:39.220 11:36:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:28:39.220 11:36:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=49345b66-57df-42e6-a913-bcce5a134103 00:28:39.220 11:36:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:28:39.220 11:36:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 49345b66-57df-42e6-a913-bcce5a134103 00:28:39.220 11:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=49345b66-57df-42e6-a913-bcce5a134103 00:28:39.220 11:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:39.220 11:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:39.220 11:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:39.220 11:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 49345b66-57df-42e6-a913-bcce5a134103 00:28:39.480 11:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:39.480 { 00:28:39.480 "name": "49345b66-57df-42e6-a913-bcce5a134103", 00:28:39.480 "aliases": [ 00:28:39.480 "lvs/nvme0n1p0" 00:28:39.480 ], 00:28:39.480 "product_name": "Logical Volume", 00:28:39.480 "block_size": 4096, 00:28:39.480 "num_blocks": 26476544, 00:28:39.480 "uuid": "49345b66-57df-42e6-a913-bcce5a134103", 00:28:39.480 "assigned_rate_limits": { 00:28:39.480 "rw_ios_per_sec": 0, 00:28:39.480 "rw_mbytes_per_sec": 0, 00:28:39.480 "r_mbytes_per_sec": 0, 00:28:39.480 "w_mbytes_per_sec": 0 00:28:39.480 }, 00:28:39.480 "claimed": false, 00:28:39.480 "zoned": false, 00:28:39.480 "supported_io_types": { 00:28:39.480 "read": true, 00:28:39.480 "write": true, 00:28:39.480 "unmap": true, 00:28:39.480 "flush": false, 00:28:39.480 "reset": true, 00:28:39.480 "nvme_admin": false, 00:28:39.480 "nvme_io": false, 00:28:39.480 "nvme_io_md": false, 00:28:39.480 "write_zeroes": true, 00:28:39.480 "zcopy": false, 00:28:39.480 "get_zone_info": false, 00:28:39.480 "zone_management": false, 00:28:39.480 "zone_append": false, 00:28:39.480 "compare": false, 00:28:39.480 "compare_and_write": false, 00:28:39.480 "abort": false, 00:28:39.480 "seek_hole": true, 00:28:39.480 "seek_data": true, 00:28:39.480 "copy": false, 00:28:39.480 "nvme_iov_md": false 00:28:39.480 }, 00:28:39.480 "driver_specific": { 00:28:39.480 "lvol": { 00:28:39.480 "lvol_store_uuid": "4f4b279e-b1f5-4354-b1ba-30fe383e8b5c", 00:28:39.480 "base_bdev": "nvme0n1", 00:28:39.480 "thin_provision": true, 00:28:39.480 "num_allocated_clusters": 0, 00:28:39.480 "snapshot": false, 00:28:39.480 "clone": false, 00:28:39.480 "esnap_clone": false 00:28:39.480 } 00:28:39.480 } 00:28:39.480 } 00:28:39.480 ]' 00:28:39.480 11:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:39.480 11:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:39.480 11:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:39.740 11:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:39.740 11:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:39.740 11:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:39.740 11:36:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:28:39.740 11:36:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:39.740 11:36:06 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:39.999 11:36:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:39.999 11:36:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:39.999 11:36:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 49345b66-57df-42e6-a913-bcce5a134103 00:28:39.999 11:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=49345b66-57df-42e6-a913-bcce5a134103 00:28:39.999 11:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:39.999 11:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:39.999 11:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:39.999 11:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 49345b66-57df-42e6-a913-bcce5a134103 00:28:39.999 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:39.999 { 00:28:39.999 "name": "49345b66-57df-42e6-a913-bcce5a134103", 00:28:39.999 "aliases": [ 00:28:39.999 "lvs/nvme0n1p0" 00:28:39.999 ], 00:28:39.999 "product_name": "Logical Volume", 00:28:39.999 "block_size": 4096, 00:28:39.999 "num_blocks": 26476544, 00:28:39.999 "uuid": "49345b66-57df-42e6-a913-bcce5a134103", 00:28:39.999 "assigned_rate_limits": { 00:28:39.999 "rw_ios_per_sec": 0, 00:28:39.999 "rw_mbytes_per_sec": 0, 00:28:39.999 "r_mbytes_per_sec": 0, 00:28:39.999 "w_mbytes_per_sec": 0 00:28:39.999 }, 00:28:39.999 "claimed": false, 00:28:39.999 "zoned": false, 00:28:39.999 "supported_io_types": { 00:28:39.999 "read": true, 00:28:39.999 "write": true, 00:28:39.999 "unmap": true, 00:28:39.999 "flush": false, 00:28:39.999 "reset": true, 00:28:39.999 "nvme_admin": false, 00:28:39.999 "nvme_io": false, 00:28:39.999 "nvme_io_md": false, 00:28:39.999 "write_zeroes": true, 00:28:39.999 "zcopy": false, 00:28:39.999 "get_zone_info": false, 00:28:39.999 "zone_management": false, 00:28:39.999 "zone_append": false, 00:28:39.999 "compare": false, 00:28:39.999 "compare_and_write": false, 00:28:39.999 "abort": false, 00:28:39.999 "seek_hole": true, 00:28:39.999 "seek_data": true, 00:28:39.999 "copy": false, 00:28:39.999 "nvme_iov_md": false 00:28:39.999 }, 00:28:39.999 "driver_specific": { 00:28:39.999 "lvol": { 00:28:39.999 "lvol_store_uuid": "4f4b279e-b1f5-4354-b1ba-30fe383e8b5c", 00:28:39.999 "base_bdev": "nvme0n1", 00:28:39.999 "thin_provision": true, 00:28:39.999 "num_allocated_clusters": 0, 00:28:39.999 "snapshot": false, 00:28:39.999 "clone": false, 00:28:39.999 "esnap_clone": false 00:28:39.999 } 00:28:39.999 } 00:28:39.999 } 00:28:39.999 ]' 00:28:39.999 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:40.259 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:40.259 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:40.259 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:40.259 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:40.259 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:40.259 11:36:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:28:40.259 11:36:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:40.259 11:36:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:28:40.259 11:36:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 49345b66-57df-42e6-a913-bcce5a134103 00:28:40.259 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=49345b66-57df-42e6-a913-bcce5a134103 00:28:40.259 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:40.259 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:40.259 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:40.259 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 49345b66-57df-42e6-a913-bcce5a134103 00:28:40.518 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:40.518 { 00:28:40.518 "name": "49345b66-57df-42e6-a913-bcce5a134103", 00:28:40.518 "aliases": [ 00:28:40.518 "lvs/nvme0n1p0" 00:28:40.518 ], 00:28:40.518 "product_name": "Logical Volume", 00:28:40.518 "block_size": 4096, 00:28:40.518 "num_blocks": 26476544, 00:28:40.518 "uuid": "49345b66-57df-42e6-a913-bcce5a134103", 00:28:40.518 "assigned_rate_limits": { 00:28:40.518 "rw_ios_per_sec": 0, 00:28:40.518 "rw_mbytes_per_sec": 0, 00:28:40.518 "r_mbytes_per_sec": 0, 00:28:40.518 "w_mbytes_per_sec": 0 00:28:40.518 }, 00:28:40.518 "claimed": false, 00:28:40.518 "zoned": false, 00:28:40.518 "supported_io_types": { 00:28:40.518 "read": true, 00:28:40.518 "write": true, 00:28:40.518 "unmap": true, 00:28:40.518 "flush": false, 00:28:40.518 "reset": true, 00:28:40.518 "nvme_admin": false, 00:28:40.518 "nvme_io": false, 00:28:40.518 "nvme_io_md": false, 00:28:40.518 "write_zeroes": true, 00:28:40.518 "zcopy": false, 00:28:40.518 "get_zone_info": false, 00:28:40.518 "zone_management": false, 00:28:40.518 "zone_append": false, 00:28:40.518 "compare": false, 00:28:40.518 "compare_and_write": false, 00:28:40.518 "abort": false, 00:28:40.518 "seek_hole": true, 00:28:40.518 "seek_data": true, 00:28:40.518 "copy": false, 00:28:40.518 "nvme_iov_md": false 00:28:40.518 }, 00:28:40.518 "driver_specific": { 00:28:40.518 "lvol": { 00:28:40.518 "lvol_store_uuid": "4f4b279e-b1f5-4354-b1ba-30fe383e8b5c", 00:28:40.518 "base_bdev": "nvme0n1", 00:28:40.518 "thin_provision": true, 00:28:40.518 "num_allocated_clusters": 0, 00:28:40.518 "snapshot": false, 00:28:40.518 "clone": false, 00:28:40.518 "esnap_clone": false 00:28:40.518 } 00:28:40.518 } 00:28:40.518 } 00:28:40.518 ]' 00:28:40.518 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:40.518 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:40.518 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:40.779 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:40.779 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:40.779 11:36:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:40.779 11:36:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:28:40.779 11:36:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 49345b66-57df-42e6-a913-bcce5a134103 
--l2p_dram_limit 10' 00:28:40.779 11:36:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:28:40.779 11:36:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:28:40.779 11:36:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:40.779 11:36:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 49345b66-57df-42e6-a913-bcce5a134103 --l2p_dram_limit 10 -c nvc0n1p0 00:28:40.779 [2024-12-10 11:36:07.838594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.779 [2024-12-10 11:36:07.838641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:40.779 [2024-12-10 11:36:07.838660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:40.779 [2024-12-10 11:36:07.838670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.779 [2024-12-10 11:36:07.838745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.779 [2024-12-10 11:36:07.838757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:40.779 [2024-12-10 11:36:07.838770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:40.779 [2024-12-10 11:36:07.838780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.779 [2024-12-10 11:36:07.838809] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:40.779 [2024-12-10 11:36:07.839780] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:40.779 [2024-12-10 11:36:07.839815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.780 [2024-12-10 11:36:07.839826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:40.780 [2024-12-10 11:36:07.839840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 00:28:40.780 [2024-12-10 11:36:07.839851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.780 [2024-12-10 11:36:07.839945] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d171327a-a22c-46d0-a605-b88a158f0097 00:28:40.780 [2024-12-10 11:36:07.841352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.780 [2024-12-10 11:36:07.841414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:40.780 [2024-12-10 11:36:07.841427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:40.780 [2024-12-10 11:36:07.841440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.780 [2024-12-10 11:36:07.849080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.780 [2024-12-10 11:36:07.849117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:40.780 [2024-12-10 11:36:07.849129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.608 ms 00:28:40.780 [2024-12-10 11:36:07.849142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.780 [2024-12-10 11:36:07.849234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.780 [2024-12-10 11:36:07.849251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:40.780 [2024-12-10 11:36:07.849262] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:28:40.780 [2024-12-10 11:36:07.849279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.780 [2024-12-10 11:36:07.849343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.780 [2024-12-10 11:36:07.849360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:40.780 [2024-12-10 11:36:07.849373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:40.780 [2024-12-10 11:36:07.849386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.780 [2024-12-10 11:36:07.849409] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:40.780 [2024-12-10 11:36:07.854420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.780 [2024-12-10 11:36:07.854457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:40.780 [2024-12-10 11:36:07.854490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.022 ms 00:28:40.780 [2024-12-10 11:36:07.854500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.780 [2024-12-10 11:36:07.854540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.780 [2024-12-10 11:36:07.854551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:40.780 [2024-12-10 11:36:07.854564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:40.780 [2024-12-10 11:36:07.854574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.780 [2024-12-10 11:36:07.854611] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:40.780 [2024-12-10 11:36:07.854749] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:40.780 [2024-12-10 11:36:07.854769] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:40.780 [2024-12-10 11:36:07.854782] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:40.780 [2024-12-10 11:36:07.854813] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:40.780 [2024-12-10 11:36:07.854825] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:40.780 [2024-12-10 11:36:07.854839] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:40.780 [2024-12-10 11:36:07.854849] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:40.780 [2024-12-10 11:36:07.854866] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:40.780 [2024-12-10 11:36:07.854876] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:40.780 [2024-12-10 11:36:07.854889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.780 [2024-12-10 11:36:07.854909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:40.780 [2024-12-10 11:36:07.854923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:28:40.780 [2024-12-10 11:36:07.854933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.780 [2024-12-10 11:36:07.855021] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.780 [2024-12-10 11:36:07.855033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:40.780 [2024-12-10 11:36:07.855045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:40.780 [2024-12-10 11:36:07.855055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.780 [2024-12-10 11:36:07.855144] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:40.780 [2024-12-10 11:36:07.855160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:40.780 [2024-12-10 11:36:07.855173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:40.780 [2024-12-10 11:36:07.855183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.780 [2024-12-10 11:36:07.855196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:40.780 [2024-12-10 11:36:07.855205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:40.780 [2024-12-10 11:36:07.855218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:40.780 [2024-12-10 11:36:07.855227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:40.780 [2024-12-10 11:36:07.855239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:40.780 [2024-12-10 11:36:07.855248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:40.780 [2024-12-10 11:36:07.855260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:40.780 [2024-12-10 11:36:07.855269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:40.780 [2024-12-10 11:36:07.855283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:40.780 [2024-12-10 11:36:07.855292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:40.780 [2024-12-10 11:36:07.855304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:40.780 [2024-12-10 11:36:07.855313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.780 [2024-12-10 11:36:07.855329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:40.780 [2024-12-10 11:36:07.855338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:40.780 [2024-12-10 11:36:07.855350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.780 [2024-12-10 11:36:07.855359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:40.780 [2024-12-10 11:36:07.855371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:40.780 [2024-12-10 11:36:07.855380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:40.780 [2024-12-10 11:36:07.855392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:40.780 [2024-12-10 11:36:07.855401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:40.780 [2024-12-10 11:36:07.855413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:40.780 [2024-12-10 11:36:07.855422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:40.780 [2024-12-10 11:36:07.855433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:40.780 [2024-12-10 11:36:07.855442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:40.780 [2024-12-10 11:36:07.855454] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:40.780 [2024-12-10 11:36:07.855463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:40.780 [2024-12-10 11:36:07.855475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:40.780 [2024-12-10 11:36:07.855484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:40.780 [2024-12-10 11:36:07.855498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:40.780 [2024-12-10 11:36:07.855507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:40.780 [2024-12-10 11:36:07.855518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:40.780 [2024-12-10 11:36:07.855527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:40.780 [2024-12-10 11:36:07.855538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:40.780 [2024-12-10 11:36:07.855547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:40.780 [2024-12-10 11:36:07.855560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:40.780 [2024-12-10 11:36:07.855569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.780 [2024-12-10 11:36:07.855581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:40.780 [2024-12-10 11:36:07.855591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:40.780 [2024-12-10 11:36:07.855602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.780 [2024-12-10 11:36:07.855611] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:40.780 [2024-12-10 11:36:07.855623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:40.780 [2024-12-10 11:36:07.855633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:40.780 [2024-12-10 11:36:07.855645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.780 [2024-12-10 11:36:07.855655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:40.780 [2024-12-10 11:36:07.855669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:40.780 [2024-12-10 11:36:07.855679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:40.780 [2024-12-10 11:36:07.855692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:40.780 [2024-12-10 11:36:07.855700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:40.780 [2024-12-10 11:36:07.855712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:40.780 [2024-12-10 11:36:07.855723] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:40.780 [2024-12-10 11:36:07.855740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:40.780 [2024-12-10 11:36:07.855751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:40.780 [2024-12-10 11:36:07.855764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:40.780 [2024-12-10 11:36:07.855774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:40.780 [2024-12-10 11:36:07.855787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:40.781 [2024-12-10 11:36:07.855797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:40.781 [2024-12-10 11:36:07.855810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:40.781 [2024-12-10 11:36:07.855820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:40.781 [2024-12-10 11:36:07.855833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:40.781 [2024-12-10 11:36:07.855843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:40.781 [2024-12-10 11:36:07.855859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:40.781 [2024-12-10 11:36:07.855869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:40.781 [2024-12-10 11:36:07.855882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:40.781 [2024-12-10 11:36:07.855892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:40.781 [2024-12-10 11:36:07.855904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:40.781 [2024-12-10 11:36:07.855922] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:40.781 [2024-12-10 11:36:07.855936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:40.781 [2024-12-10 11:36:07.855947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:40.781 [2024-12-10 11:36:07.855960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:40.781 [2024-12-10 11:36:07.855970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:40.781 [2024-12-10 11:36:07.855982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:40.781 [2024-12-10 11:36:07.855993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.781 [2024-12-10 11:36:07.856006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:40.781 [2024-12-10 11:36:07.856016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:28:40.781 [2024-12-10 11:36:07.856028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.781 [2024-12-10 11:36:07.856068] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:40.781 [2024-12-10 11:36:07.856086] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:45.050 [2024-12-10 11:36:11.680863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.050 [2024-12-10 11:36:11.680958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:45.050 [2024-12-10 11:36:11.680978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3831.004 ms 00:28:45.050 [2024-12-10 11:36:11.680992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.050 [2024-12-10 11:36:11.716483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.050 [2024-12-10 11:36:11.716535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:45.050 [2024-12-10 11:36:11.716551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.233 ms 00:28:45.050 [2024-12-10 11:36:11.716564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.050 [2024-12-10 11:36:11.716704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.050 [2024-12-10 11:36:11.716720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:45.050 [2024-12-10 11:36:11.716732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:45.050 [2024-12-10 11:36:11.716751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.050 [2024-12-10 11:36:11.761935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.050 [2024-12-10 11:36:11.761976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:45.050 [2024-12-10 11:36:11.762005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.216 ms 00:28:45.050 [2024-12-10 11:36:11.762019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.050 [2024-12-10 11:36:11.762053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.050 [2024-12-10 11:36:11.762071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:45.050 [2024-12-10 11:36:11.762082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:45.050 [2024-12-10 11:36:11.762104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.050 [2024-12-10 11:36:11.762591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.050 [2024-12-10 11:36:11.762614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:45.050 [2024-12-10 11:36:11.762625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:28:45.050 [2024-12-10 11:36:11.762637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.050 [2024-12-10 11:36:11.762733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.050 [2024-12-10 11:36:11.762747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:45.050 [2024-12-10 11:36:11.762760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:28:45.050 [2024-12-10 11:36:11.762776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.050 [2024-12-10 11:36:11.783079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.050 [2024-12-10 11:36:11.783116] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:45.050 [2024-12-10 11:36:11.783145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.316 ms 00:28:45.050 [2024-12-10 11:36:11.783158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.050 [2024-12-10 11:36:11.818851] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:45.050 [2024-12-10 11:36:11.823083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.050 [2024-12-10 11:36:11.823114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:45.050 [2024-12-10 11:36:11.823133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.886 ms 00:28:45.050 [2024-12-10 11:36:11.823145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.050 [2024-12-10 11:36:11.923955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.050 [2024-12-10 11:36:11.924003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:45.050 [2024-12-10 11:36:11.924022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.923 ms 00:28:45.050 [2024-12-10 11:36:11.924034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.050 [2024-12-10 11:36:11.924214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.050 [2024-12-10 11:36:11.924230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:45.050 [2024-12-10 11:36:11.924247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:28:45.050 [2024-12-10 11:36:11.924257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.050 [2024-12-10 11:36:11.960181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.050 [2024-12-10 11:36:11.960217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:45.050 [2024-12-10 11:36:11.960234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.925 ms 00:28:45.050 [2024-12-10 11:36:11.960245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.050 [2024-12-10 11:36:11.994323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.050 [2024-12-10 11:36:11.994356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:45.050 [2024-12-10 11:36:11.994372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.085 ms 00:28:45.050 [2024-12-10 11:36:11.994381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.050 [2024-12-10 11:36:11.995019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.050 [2024-12-10 11:36:11.995039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:45.050 [2024-12-10 11:36:11.995052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:28:45.050 [2024-12-10 11:36:11.995065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.050 [2024-12-10 11:36:12.095566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.050 [2024-12-10 11:36:12.095601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:45.050 [2024-12-10 11:36:12.095620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.594 ms 00:28:45.050 [2024-12-10 11:36:12.095630] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.050 [2024-12-10 11:36:12.130504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.050 [2024-12-10 11:36:12.130538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:45.050 [2024-12-10 11:36:12.130554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.836 ms 00:28:45.050 [2024-12-10 11:36:12.130563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.310 [2024-12-10 11:36:12.164525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.310 [2024-12-10 11:36:12.164556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:45.310 [2024-12-10 11:36:12.164571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.974 ms 00:28:45.310 [2024-12-10 11:36:12.164580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.310 [2024-12-10 11:36:12.199314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.310 [2024-12-10 11:36:12.199347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:45.310 [2024-12-10 11:36:12.199363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.749 ms 00:28:45.310 [2024-12-10 11:36:12.199372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.310 [2024-12-10 11:36:12.199416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.310 [2024-12-10 11:36:12.199427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:45.310 [2024-12-10 11:36:12.199443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:45.310 [2024-12-10 11:36:12.199452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.310 [2024-12-10 11:36:12.199565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.310 [2024-12-10 11:36:12.199581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:45.310 [2024-12-10 11:36:12.199593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:45.310 [2024-12-10 11:36:12.199602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.310 [2024-12-10 11:36:12.200895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4368.965 ms, result 0 00:28:45.310 { 00:28:45.310 "name": "ftl0", 00:28:45.310 "uuid": "d171327a-a22c-46d0-a605-b88a158f0097" 00:28:45.310 } 00:28:45.310 11:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:28:45.310 11:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:45.569 11:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:28:45.569 11:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:28:45.569 11:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:28:45.569 /dev/nbd0 00:28:45.569 11:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:28:45.569 11:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:45.569 11:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:28:45.569 11:36:12 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:45.569 11:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:45.569 11:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:45.569 11:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:28:45.569 11:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:45.569 11:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:45.569 11:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:28:45.569 1+0 records in 00:28:45.569 1+0 records out 00:28:45.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350415 s, 11.7 MB/s 00:28:45.829 11:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:45.829 11:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:28:45.829 11:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:45.829 11:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:45.829 11:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:28:45.829 11:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:28:45.829 [2024-12-10 11:36:12.787216] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:28:45.829 [2024-12-10 11:36:12.787323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81555 ] 00:28:46.088 [2024-12-10 11:36:12.967850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.089 [2024-12-10 11:36:13.088581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.468  [2024-12-10T11:36:15.519Z] Copying: 209/1024 [MB] (209 MBps) [2024-12-10T11:36:16.455Z] Copying: 419/1024 [MB] (209 MBps) [2024-12-10T11:36:17.835Z] Copying: 630/1024 [MB] (211 MBps) [2024-12-10T11:36:18.403Z] Copying: 837/1024 [MB] (207 MBps) [2024-12-10T11:36:19.783Z] Copying: 1024/1024 [MB] (average 208 MBps) 00:28:52.669 00:28:52.669 11:36:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:54.574 11:36:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:28:54.574 [2024-12-10 11:36:21.315671] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
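The commands traced at dirty_shutdown.sh lines 75-77 above form the write phase of the test: spdk_dd fills testfile with 1 GiB of random data (262144 blocks of 4096 bytes), md5sum records its digest for the post-recovery comparison, and a second spdk_dd pushes the same file through /dev/nbd0, the NBD export of ftl0, with --oflag=direct. A minimal sketch of that data path, assuming plain coreutils dd in place of the spdk_dd app the test actually drives (paths, block counts, and the sync from line 78 are copied from the trace; the .md5 file is introduced here for illustration):

    #!/usr/bin/env bash
    # Sketch only: the real test uses spdk_dd, which performs the I/O from
    # an SPDK reactor thread; kernel dd is substituted for illustration.
    set -euo pipefail

    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    nbd=/dev/nbd0

    # 262144 x 4096 B = 1 GiB of random payload.
    dd if=/dev/urandom of="$testfile" bs=4096 count=262144

    # Record the digest so the data can be verified after recovery
    # (the test just logs md5sum output; a file is used here).
    md5sum "$testfile" > "$testfile.md5"

    # Write the payload through the FTL bdev's NBD export, bypassing the
    # page cache as the test does with --oflag=direct, then flush it.
    dd if="$testfile" of="$nbd" bs=4096 count=262144 oflag=direct
    sync "$nbd"

The startup banner that follows belongs to the spdk_dd invocation from line 77; the drop from ~208 MBps (urandom to file) to ~16 MBps in the progress entries below likely reflects that every 4 KiB write now traverses the NBD kernel module and the FTL write buffer rather than going straight to a local file.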
00:28:54.574 [2024-12-10 11:36:21.315801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81648 ] 00:28:54.574 [2024-12-10 11:36:21.499582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.574 [2024-12-10 11:36:21.624733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.952  [2024-12-10T11:36:24.003Z] Copying: 15/1024 [MB] (15 MBps) [2024-12-10T11:36:25.380Z] Copying: 30/1024 [MB] (15 MBps) [2024-12-10T11:36:26.315Z] Copying: 46/1024 [MB] (15 MBps) [2024-12-10T11:36:27.251Z] Copying: 62/1024 [MB] (16 MBps) [2024-12-10T11:36:28.189Z] Copying: 78/1024 [MB] (16 MBps) [2024-12-10T11:36:29.124Z] Copying: 95/1024 [MB] (16 MBps) [2024-12-10T11:36:30.060Z] Copying: 111/1024 [MB] (16 MBps) [2024-12-10T11:36:30.995Z] Copying: 127/1024 [MB] (16 MBps) [2024-12-10T11:36:32.370Z] Copying: 143/1024 [MB] (15 MBps) [2024-12-10T11:36:33.307Z] Copying: 159/1024 [MB] (15 MBps) [2024-12-10T11:36:34.244Z] Copying: 175/1024 [MB] (16 MBps) [2024-12-10T11:36:35.180Z] Copying: 191/1024 [MB] (15 MBps) [2024-12-10T11:36:36.115Z] Copying: 206/1024 [MB] (15 MBps) [2024-12-10T11:36:37.124Z] Copying: 222/1024 [MB] (15 MBps) [2024-12-10T11:36:38.062Z] Copying: 239/1024 [MB] (16 MBps) [2024-12-10T11:36:38.998Z] Copying: 255/1024 [MB] (16 MBps) [2024-12-10T11:36:40.376Z] Copying: 270/1024 [MB] (15 MBps) [2024-12-10T11:36:41.313Z] Copying: 285/1024 [MB] (14 MBps) [2024-12-10T11:36:42.251Z] Copying: 301/1024 [MB] (15 MBps) [2024-12-10T11:36:43.188Z] Copying: 316/1024 [MB] (15 MBps) [2024-12-10T11:36:44.125Z] Copying: 332/1024 [MB] (16 MBps) [2024-12-10T11:36:45.062Z] Copying: 348/1024 [MB] (16 MBps) [2024-12-10T11:36:45.996Z] Copying: 364/1024 [MB] (16 MBps) [2024-12-10T11:36:47.375Z] Copying: 381/1024 [MB] (16 MBps) [2024-12-10T11:36:48.312Z] Copying: 398/1024 [MB] (16 MBps) [2024-12-10T11:36:49.248Z] Copying: 414/1024 [MB] (16 MBps) [2024-12-10T11:36:50.185Z] Copying: 431/1024 [MB] (16 MBps) [2024-12-10T11:36:51.123Z] Copying: 447/1024 [MB] (16 MBps) [2024-12-10T11:36:52.060Z] Copying: 463/1024 [MB] (16 MBps) [2024-12-10T11:36:52.997Z] Copying: 480/1024 [MB] (16 MBps) [2024-12-10T11:36:53.934Z] Copying: 497/1024 [MB] (16 MBps) [2024-12-10T11:36:55.312Z] Copying: 513/1024 [MB] (16 MBps) [2024-12-10T11:36:56.248Z] Copying: 530/1024 [MB] (16 MBps) [2024-12-10T11:36:57.185Z] Copying: 546/1024 [MB] (16 MBps) [2024-12-10T11:36:58.122Z] Copying: 562/1024 [MB] (16 MBps) [2024-12-10T11:36:59.059Z] Copying: 579/1024 [MB] (16 MBps) [2024-12-10T11:36:59.997Z] Copying: 595/1024 [MB] (16 MBps) [2024-12-10T11:37:00.934Z] Copying: 611/1024 [MB] (16 MBps) [2024-12-10T11:37:02.323Z] Copying: 627/1024 [MB] (16 MBps) [2024-12-10T11:37:02.954Z] Copying: 643/1024 [MB] (15 MBps) [2024-12-10T11:37:04.331Z] Copying: 659/1024 [MB] (15 MBps) [2024-12-10T11:37:05.279Z] Copying: 675/1024 [MB] (15 MBps) [2024-12-10T11:37:06.216Z] Copying: 692/1024 [MB] (16 MBps) [2024-12-10T11:37:07.153Z] Copying: 708/1024 [MB] (16 MBps) [2024-12-10T11:37:08.090Z] Copying: 724/1024 [MB] (16 MBps) [2024-12-10T11:37:09.028Z] Copying: 741/1024 [MB] (16 MBps) [2024-12-10T11:37:09.964Z] Copying: 756/1024 [MB] (15 MBps) [2024-12-10T11:37:11.340Z] Copying: 772/1024 [MB] (15 MBps) [2024-12-10T11:37:11.908Z] Copying: 788/1024 [MB] (16 MBps) [2024-12-10T11:37:13.294Z] Copying: 804/1024 [MB] (16 MBps) 
[2024-12-10T11:37:14.230Z] Copying: 820/1024 [MB] (16 MBps) [2024-12-10T11:37:15.167Z] Copying: 837/1024 [MB] (16 MBps) [2024-12-10T11:37:16.103Z] Copying: 853/1024 [MB] (15 MBps) [2024-12-10T11:37:17.040Z] Copying: 869/1024 [MB] (16 MBps) [2024-12-10T11:37:17.977Z] Copying: 886/1024 [MB] (16 MBps) [2024-12-10T11:37:18.915Z] Copying: 903/1024 [MB] (16 MBps) [2024-12-10T11:37:20.292Z] Copying: 920/1024 [MB] (16 MBps) [2024-12-10T11:37:21.229Z] Copying: 936/1024 [MB] (16 MBps) [2024-12-10T11:37:22.165Z] Copying: 952/1024 [MB] (16 MBps) [2024-12-10T11:37:23.103Z] Copying: 969/1024 [MB] (16 MBps) [2024-12-10T11:37:24.065Z] Copying: 985/1024 [MB] (16 MBps) [2024-12-10T11:37:25.002Z] Copying: 1001/1024 [MB] (16 MBps) [2024-12-10T11:37:25.576Z] Copying: 1018/1024 [MB] (16 MBps) [2024-12-10T11:37:26.514Z] Copying: 1024/1024 [MB] (average 16 MBps) 00:29:59.400 00:29:59.400 11:37:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:29:59.400 11:37:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:29:59.659 11:37:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:59.918 [2024-12-10 11:37:26.867077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.918 [2024-12-10 11:37:26.867128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:59.918 [2024-12-10 11:37:26.867143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:59.918 [2024-12-10 11:37:26.867156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.918 [2024-12-10 11:37:26.867183] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:59.918 [2024-12-10 11:37:26.871494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.918 [2024-12-10 11:37:26.871531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:59.918 [2024-12-10 11:37:26.871545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.297 ms 00:29:59.918 [2024-12-10 11:37:26.871555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.918 [2024-12-10 11:37:26.873852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.918 [2024-12-10 11:37:26.873894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:59.918 [2024-12-10 11:37:26.873910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.267 ms 00:29:59.918 [2024-12-10 11:37:26.873943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.918 [2024-12-10 11:37:26.892611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.918 [2024-12-10 11:37:26.892654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:59.918 [2024-12-10 11:37:26.892687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.667 ms 00:29:59.918 [2024-12-10 11:37:26.892698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.918 [2024-12-10 11:37:26.897686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.918 [2024-12-10 11:37:26.897723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:59.918 [2024-12-10 11:37:26.897738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.953 ms 00:29:59.918 
[2024-12-10 11:37:26.897749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.918 [2024-12-10 11:37:26.934286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.918 [2024-12-10 11:37:26.934328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:59.918 [2024-12-10 11:37:26.934344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.477 ms 00:29:59.918 [2024-12-10 11:37:26.934362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.918 [2024-12-10 11:37:26.956100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.918 [2024-12-10 11:37:26.956139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:59.918 [2024-12-10 11:37:26.956174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.726 ms 00:29:59.918 [2024-12-10 11:37:26.956184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.918 [2024-12-10 11:37:26.956336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.918 [2024-12-10 11:37:26.956351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:59.918 [2024-12-10 11:37:26.956364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:29:59.918 [2024-12-10 11:37:26.956375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.918 [2024-12-10 11:37:26.992290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.918 [2024-12-10 11:37:26.992328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:59.918 [2024-12-10 11:37:26.992360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.951 ms 00:29:59.918 [2024-12-10 11:37:26.992369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.918 [2024-12-10 11:37:27.027517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.918 [2024-12-10 11:37:27.027558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:59.918 [2024-12-10 11:37:27.027575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.162 ms 00:29:59.918 [2024-12-10 11:37:27.027585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.179 [2024-12-10 11:37:27.062347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.179 [2024-12-10 11:37:27.062386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:00.179 [2024-12-10 11:37:27.062402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.745 ms 00:30:00.179 [2024-12-10 11:37:27.062413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.179 [2024-12-10 11:37:27.097075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.179 [2024-12-10 11:37:27.097113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:00.179 [2024-12-10 11:37:27.097145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.622 ms 00:30:00.179 [2024-12-10 11:37:27.097165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.179 [2024-12-10 11:37:27.097206] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:00.179 [2024-12-10 11:37:27.097223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 
[2024-12-10 11:37:27.097237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 
00:30:00.179 [2024-12-10 11:37:27.097574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:00.179 [2024-12-10 11:37:27.097743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 
wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.097998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:00.180 [2024-12-10 11:37:27.098498] ftl_debug.c: 211:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] 00:30:00.180 [2024-12-10 11:37:27.098510] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d171327a-a22c-46d0-a605-b88a158f0097 00:30:00.180 [2024-12-10 11:37:27.098521] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:00.180 [2024-12-10 11:37:27.098536] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:00.180 [2024-12-10 11:37:27.098555] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:00.180 [2024-12-10 11:37:27.098567] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:00.180 [2024-12-10 11:37:27.098577] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:00.180 [2024-12-10 11:37:27.098589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:00.180 [2024-12-10 11:37:27.098600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:00.180 [2024-12-10 11:37:27.098611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:00.180 [2024-12-10 11:37:27.098620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:00.180 [2024-12-10 11:37:27.098632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.180 [2024-12-10 11:37:27.098642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:00.180 [2024-12-10 11:37:27.098654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.430 ms 00:30:00.180 [2024-12-10 11:37:27.098664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.180 [2024-12-10 11:37:27.117646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.180 [2024-12-10 11:37:27.117682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:00.180 [2024-12-10 11:37:27.117713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.954 ms 00:30:00.180 [2024-12-10 11:37:27.117723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.180 [2024-12-10 11:37:27.118299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.180 [2024-12-10 11:37:27.118318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:00.180 [2024-12-10 11:37:27.118332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:30:00.180 [2024-12-10 11:37:27.118342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.180 [2024-12-10 11:37:27.179859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.180 [2024-12-10 11:37:27.179898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:00.180 [2024-12-10 11:37:27.179913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.180 [2024-12-10 11:37:27.179929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.180 [2024-12-10 11:37:27.180003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.180 [2024-12-10 11:37:27.180019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:00.180 [2024-12-10 11:37:27.180032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.180 [2024-12-10 11:37:27.180043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.180 [2024-12-10 11:37:27.180123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:30:00.180 [2024-12-10 11:37:27.180140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:00.180 [2024-12-10 11:37:27.180154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.180 [2024-12-10 11:37:27.180164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.180 [2024-12-10 11:37:27.180188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.180 [2024-12-10 11:37:27.180198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:00.180 [2024-12-10 11:37:27.180211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.180 [2024-12-10 11:37:27.180221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.440 [2024-12-10 11:37:27.300547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.440 [2024-12-10 11:37:27.300597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:00.440 [2024-12-10 11:37:27.300614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.440 [2024-12-10 11:37:27.300625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.440 [2024-12-10 11:37:27.395634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.440 [2024-12-10 11:37:27.395684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:00.440 [2024-12-10 11:37:27.395701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.440 [2024-12-10 11:37:27.395711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.440 [2024-12-10 11:37:27.395820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.440 [2024-12-10 11:37:27.395832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:00.440 [2024-12-10 11:37:27.395857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.440 [2024-12-10 11:37:27.395867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.440 [2024-12-10 11:37:27.395938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.440 [2024-12-10 11:37:27.395951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:00.440 [2024-12-10 11:37:27.395980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.440 [2024-12-10 11:37:27.395990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.440 [2024-12-10 11:37:27.396132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.440 [2024-12-10 11:37:27.396146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:00.440 [2024-12-10 11:37:27.396165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.440 [2024-12-10 11:37:27.396178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.440 [2024-12-10 11:37:27.396224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.440 [2024-12-10 11:37:27.396237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:00.440 [2024-12-10 11:37:27.396250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.440 [2024-12-10 11:37:27.396261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.440 [2024-12-10 
11:37:27.396303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.440 [2024-12-10 11:37:27.396316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:00.440 [2024-12-10 11:37:27.396328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.440 [2024-12-10 11:37:27.396341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.440 [2024-12-10 11:37:27.396392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.440 [2024-12-10 11:37:27.396405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:00.440 [2024-12-10 11:37:27.396418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.440 [2024-12-10 11:37:27.396429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.440 [2024-12-10 11:37:27.396576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 530.301 ms, result 0 00:30:00.440 true 00:30:00.440 11:37:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81407 00:30:00.440 11:37:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81407 00:30:00.440 11:37:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:30:00.440 [2024-12-10 11:37:27.524940] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:30:00.441 [2024-12-10 11:37:27.525071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82316 ] 00:30:00.700 [2024-12-10 11:37:27.704500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.700 [2024-12-10 11:37:27.809076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.112  [2024-12-10T11:37:30.164Z] Copying: 215/1024 [MB] (215 MBps) [2024-12-10T11:37:31.543Z] Copying: 433/1024 [MB] (217 MBps) [2024-12-10T11:37:32.482Z] Copying: 650/1024 [MB] (217 MBps) [2024-12-10T11:37:33.050Z] Copying: 864/1024 [MB] (213 MBps) [2024-12-10T11:37:33.987Z] Copying: 1024/1024 [MB] (average 215 MBps) 00:30:06.873 00:30:07.131 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81407 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:30:07.131 11:37:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:07.131 [2024-12-10 11:37:34.072289] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
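Annotation: the spdk_dd flags above size the workload. With --bs=4096 and --count=262144 the test stages exactly 1 GiB of random data in testfile2, which is why the progress lines run to 1024/1024 [MB]; the follow-up write into ftl0 reuses the same count together with --seek=262144, so it lands in the second 1 GiB of the device. A quick arithmetic check, plain Python, nothing SPDK-specific:

# Sizes implied by the spdk_dd flags used in this test.
bs = 4096                      # --bs, bytes per block
count = 262144                 # --count, blocks per invocation

total = bs * count
print(total)                   # 1073741824 bytes
print(total // (1024 * 1024))  # 1024 MiB, matching "Copying: 1024/1024 [MB]"

# --seek=262144 offsets the ftl0 write by the same amount,
# i.e. it starts 1 GiB into the bdev.
print(bs * 262144 == total)    # True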
00:30:07.131 [2024-12-10 11:37:34.072421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82380 ] 00:30:07.389 [2024-12-10 11:37:34.248987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.389 [2024-12-10 11:37:34.355055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.648 [2024-12-10 11:37:34.715588] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:07.648 [2024-12-10 11:37:34.715656] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:07.908 [2024-12-10 11:37:34.781242] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:07.908 [2024-12-10 11:37:34.781549] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:07.908 [2024-12-10 11:37:34.781880] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:08.168 [2024-12-10 11:37:35.100153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.168 [2024-12-10 11:37:35.100199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:08.168 [2024-12-10 11:37:35.100215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:08.168 [2024-12-10 11:37:35.100228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.168 [2024-12-10 11:37:35.100290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.168 [2024-12-10 11:37:35.100301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:08.168 [2024-12-10 11:37:35.100313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:08.168 [2024-12-10 11:37:35.100323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.168 [2024-12-10 11:37:35.100344] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:08.168 [2024-12-10 11:37:35.101361] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:08.168 [2024-12-10 11:37:35.101390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.168 [2024-12-10 11:37:35.101402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:08.168 [2024-12-10 11:37:35.101413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:30:08.168 [2024-12-10 11:37:35.101423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.168 [2024-12-10 11:37:35.102876] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:08.168 [2024-12-10 11:37:35.121283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.168 [2024-12-10 11:37:35.121320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:08.168 [2024-12-10 11:37:35.121334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.437 ms 00:30:08.168 [2024-12-10 11:37:35.121352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.168 [2024-12-10 11:37:35.121429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.168 [2024-12-10 11:37:35.121442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:30:08.168 [2024-12-10 11:37:35.121452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:30:08.168 [2024-12-10 11:37:35.121462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.168 [2024-12-10 11:37:35.128321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.168 [2024-12-10 11:37:35.128348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:08.168 [2024-12-10 11:37:35.128359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.792 ms 00:30:08.168 [2024-12-10 11:37:35.128370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.168 [2024-12-10 11:37:35.128462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.168 [2024-12-10 11:37:35.128475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:08.168 [2024-12-10 11:37:35.128485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:08.168 [2024-12-10 11:37:35.128495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.168 [2024-12-10 11:37:35.128537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.168 [2024-12-10 11:37:35.128549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:08.168 [2024-12-10 11:37:35.128559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:08.168 [2024-12-10 11:37:35.128568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.168 [2024-12-10 11:37:35.128592] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:08.168 [2024-12-10 11:37:35.133090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.168 [2024-12-10 11:37:35.133121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:08.168 [2024-12-10 11:37:35.133132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.511 ms 00:30:08.168 [2024-12-10 11:37:35.133157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.168 [2024-12-10 11:37:35.133190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.168 [2024-12-10 11:37:35.133200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:08.168 [2024-12-10 11:37:35.133211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:08.168 [2024-12-10 11:37:35.133221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.168 [2024-12-10 11:37:35.133274] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:08.168 [2024-12-10 11:37:35.133299] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:08.168 [2024-12-10 11:37:35.133333] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:08.168 [2024-12-10 11:37:35.133350] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:08.168 [2024-12-10 11:37:35.133452] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:08.168 [2024-12-10 11:37:35.133466] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:08.168 
[2024-12-10 11:37:35.133478] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:08.168 [2024-12-10 11:37:35.133495] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:08.168 [2024-12-10 11:37:35.133516] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:08.168 [2024-12-10 11:37:35.133528] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:08.168 [2024-12-10 11:37:35.133538] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:08.168 [2024-12-10 11:37:35.133548] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:08.168 [2024-12-10 11:37:35.133557] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:08.168 [2024-12-10 11:37:35.133567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.168 [2024-12-10 11:37:35.133577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:08.168 [2024-12-10 11:37:35.133588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:30:08.168 [2024-12-10 11:37:35.133597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.168 [2024-12-10 11:37:35.133668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.168 [2024-12-10 11:37:35.133681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:08.168 [2024-12-10 11:37:35.133691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:08.168 [2024-12-10 11:37:35.133702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.168 [2024-12-10 11:37:35.133787] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:08.168 [2024-12-10 11:37:35.133801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:08.168 [2024-12-10 11:37:35.133812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:08.168 [2024-12-10 11:37:35.133822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.168 [2024-12-10 11:37:35.133832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:08.169 [2024-12-10 11:37:35.133841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:08.169 [2024-12-10 11:37:35.133851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:08.169 [2024-12-10 11:37:35.133860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:08.169 [2024-12-10 11:37:35.133870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:08.169 [2024-12-10 11:37:35.133891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:08.169 [2024-12-10 11:37:35.133901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:08.169 [2024-12-10 11:37:35.133911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:08.169 [2024-12-10 11:37:35.133936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:08.169 [2024-12-10 11:37:35.133945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:08.169 [2024-12-10 11:37:35.133955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:08.169 [2024-12-10 11:37:35.133964] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.169 [2024-12-10 11:37:35.133973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:08.169 [2024-12-10 11:37:35.133982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:08.169 [2024-12-10 11:37:35.133991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.169 [2024-12-10 11:37:35.134001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:08.169 [2024-12-10 11:37:35.134010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:08.169 [2024-12-10 11:37:35.134019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.169 [2024-12-10 11:37:35.134027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:08.169 [2024-12-10 11:37:35.134036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:08.169 [2024-12-10 11:37:35.134045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.169 [2024-12-10 11:37:35.134054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:08.169 [2024-12-10 11:37:35.134062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:08.169 [2024-12-10 11:37:35.134071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.169 [2024-12-10 11:37:35.134080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:08.169 [2024-12-10 11:37:35.134088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:08.169 [2024-12-10 11:37:35.134097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.169 [2024-12-10 11:37:35.134106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:08.169 [2024-12-10 11:37:35.134115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:08.169 [2024-12-10 11:37:35.134123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:08.169 [2024-12-10 11:37:35.134132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:08.169 [2024-12-10 11:37:35.134141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:08.169 [2024-12-10 11:37:35.134149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:08.169 [2024-12-10 11:37:35.134158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:08.169 [2024-12-10 11:37:35.134167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:08.169 [2024-12-10 11:37:35.134175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.169 [2024-12-10 11:37:35.134184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:08.169 [2024-12-10 11:37:35.134193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:08.169 [2024-12-10 11:37:35.134203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.169 [2024-12-10 11:37:35.134212] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:08.169 [2024-12-10 11:37:35.134221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:08.169 [2024-12-10 11:37:35.134235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:08.169 [2024-12-10 11:37:35.134244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.169 [2024-12-10 
11:37:35.134254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:08.169 [2024-12-10 11:37:35.134263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:08.169 [2024-12-10 11:37:35.134272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:08.169 [2024-12-10 11:37:35.134281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:08.169 [2024-12-10 11:37:35.134290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:08.169 [2024-12-10 11:37:35.134299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:08.169 [2024-12-10 11:37:35.134310] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:08.169 [2024-12-10 11:37:35.134322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:08.169 [2024-12-10 11:37:35.134333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:08.169 [2024-12-10 11:37:35.134343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:08.169 [2024-12-10 11:37:35.134353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:08.169 [2024-12-10 11:37:35.134364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:08.169 [2024-12-10 11:37:35.134375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:08.169 [2024-12-10 11:37:35.134385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:08.169 [2024-12-10 11:37:35.134395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:08.169 [2024-12-10 11:37:35.134406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:08.169 [2024-12-10 11:37:35.134415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:08.169 [2024-12-10 11:37:35.134426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:08.169 [2024-12-10 11:37:35.134436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:08.169 [2024-12-10 11:37:35.134447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:08.169 [2024-12-10 11:37:35.134457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:08.169 [2024-12-10 11:37:35.134467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:08.169 [2024-12-10 11:37:35.134476] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:30:08.169 [2024-12-10 11:37:35.134487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:08.169 [2024-12-10 11:37:35.134498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:08.169 [2024-12-10 11:37:35.134508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:08.169 [2024-12-10 11:37:35.134518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:08.169 [2024-12-10 11:37:35.134531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:08.169 [2024-12-10 11:37:35.134541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.169 [2024-12-10 11:37:35.134551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:08.169 [2024-12-10 11:37:35.134561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:30:08.169 [2024-12-10 11:37:35.134570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.169 [2024-12-10 11:37:35.172467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.169 [2024-12-10 11:37:35.172508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:08.169 [2024-12-10 11:37:35.172538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.908 ms 00:30:08.169 [2024-12-10 11:37:35.172549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.169 [2024-12-10 11:37:35.172629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.169 [2024-12-10 11:37:35.172641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:08.169 [2024-12-10 11:37:35.172651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:30:08.169 [2024-12-10 11:37:35.172661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.169 [2024-12-10 11:37:35.226350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.169 [2024-12-10 11:37:35.226397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:08.169 [2024-12-10 11:37:35.226415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.719 ms 00:30:08.169 [2024-12-10 11:37:35.226425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.169 [2024-12-10 11:37:35.226471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.169 [2024-12-10 11:37:35.226482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:08.169 [2024-12-10 11:37:35.226493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:08.169 [2024-12-10 11:37:35.226502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.169 [2024-12-10 11:37:35.227036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.169 [2024-12-10 11:37:35.227059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:08.169 [2024-12-10 11:37:35.227071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:30:08.169 [2024-12-10 11:37:35.227086] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.169 [2024-12-10 11:37:35.227203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.169 [2024-12-10 11:37:35.227216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:08.170 [2024-12-10 11:37:35.227227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:30:08.170 [2024-12-10 11:37:35.227237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.170 [2024-12-10 11:37:35.246495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.170 [2024-12-10 11:37:35.246531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:08.170 [2024-12-10 11:37:35.246545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.269 ms 00:30:08.170 [2024-12-10 11:37:35.246555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.170 [2024-12-10 11:37:35.265283] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:08.170 [2024-12-10 11:37:35.265319] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:08.170 [2024-12-10 11:37:35.265349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.170 [2024-12-10 11:37:35.265361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:08.170 [2024-12-10 11:37:35.265372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.717 ms 00:30:08.170 [2024-12-10 11:37:35.265382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.429 [2024-12-10 11:37:35.295104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.429 [2024-12-10 11:37:35.295150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:08.429 [2024-12-10 11:37:35.295164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.727 ms 00:30:08.429 [2024-12-10 11:37:35.295175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.429 [2024-12-10 11:37:35.314092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.429 [2024-12-10 11:37:35.314134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:08.429 [2024-12-10 11:37:35.314148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.888 ms 00:30:08.429 [2024-12-10 11:37:35.314158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.429 [2024-12-10 11:37:35.332135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.429 [2024-12-10 11:37:35.332173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:08.429 [2024-12-10 11:37:35.332186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.965 ms 00:30:08.429 [2024-12-10 11:37:35.332196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.429 [2024-12-10 11:37:35.332963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.429 [2024-12-10 11:37:35.332994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:08.429 [2024-12-10 11:37:35.333007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:30:08.429 [2024-12-10 11:37:35.333017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
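Annotation: the "SB metadata layout" records a few steps back list every region as hexadecimal blk_offs/blk_sz pairs, while the ftl_layout dump reports the same regions in MiB; the two views agree if one FTL block is 4 KiB. A small conversion sketch; the 4 KiB block size and the type-to-name pairing below are inferred from the numbers in this log rather than taken from SPDK headers:

BLOCK_SIZE = 4096  # bytes; inferred from blk_sz 0x5000 mapping to the 80.00 MiB l2p region

def region_mib(blk_offs, blk_sz):
    """Convert a superblock (blk_offs, blk_sz) pair to (offset, size) in MiB."""
    to_mib = lambda blocks: blocks * BLOCK_SIZE / (1024 * 1024)
    return to_mib(blk_offs), to_mib(blk_sz)

# type:0x2, blk_offs:0x20, blk_sz:0x5000 -> the l2p region
print(region_mib(0x20, 0x5000))   # (0.125, 80.0): "offset: 0.12 MiB ... blocks: 80.00 MiB"
# type:0x3, blk_offs:0x5020, blk_sz:0x80 -> band_md
print(region_mib(0x5020, 0x80))   # (80.125, 0.5): "offset: 80.12 MiB ... blocks: 0.50 MiB"
# type:0xa, blk_offs:0x5120, blk_sz:0x800 -> p2l0
print(region_mib(0x5120, 0x800))  # (81.125, 8.0): "offset: 81.12 MiB ... blocks: 8.00 MiB"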
00:30:08.429 [2024-12-10 11:37:35.419297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.429 [2024-12-10 11:37:35.419353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:08.429 [2024-12-10 11:37:35.419382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.397 ms 00:30:08.429 [2024-12-10 11:37:35.419393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.429 [2024-12-10 11:37:35.430774] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:08.429 [2024-12-10 11:37:35.433980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.429 [2024-12-10 11:37:35.434016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:08.429 [2024-12-10 11:37:35.434031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.546 ms 00:30:08.429 [2024-12-10 11:37:35.434047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.429 [2024-12-10 11:37:35.434147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.429 [2024-12-10 11:37:35.434160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:08.429 [2024-12-10 11:37:35.434172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:08.429 [2024-12-10 11:37:35.434183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.429 [2024-12-10 11:37:35.434277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.429 [2024-12-10 11:37:35.434290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:08.429 [2024-12-10 11:37:35.434301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:30:08.429 [2024-12-10 11:37:35.434311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.429 [2024-12-10 11:37:35.434341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.429 [2024-12-10 11:37:35.434352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:08.429 [2024-12-10 11:37:35.434363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:08.429 [2024-12-10 11:37:35.434373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.429 [2024-12-10 11:37:35.434406] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:08.429 [2024-12-10 11:37:35.434417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.429 [2024-12-10 11:37:35.434427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:08.429 [2024-12-10 11:37:35.434437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:08.429 [2024-12-10 11:37:35.434451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.429 [2024-12-10 11:37:35.470666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.429 [2024-12-10 11:37:35.470713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:08.429 [2024-12-10 11:37:35.470728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.252 ms 00:30:08.429 [2024-12-10 11:37:35.470755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.429 [2024-12-10 11:37:35.470829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.429 [2024-12-10 
11:37:35.470842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:08.429 [2024-12-10 11:37:35.470853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:08.429 [2024-12-10 11:37:35.470863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.429 [2024-12-10 11:37:35.471995] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 371.979 ms, result 0 00:30:09.377  [2024-12-10T11:37:37.871Z] Copying: 21/1024 [MB] (21 MBps) [43 per-interval progress updates, 18-24 MBps each, condensed] [2024-12-10T11:38:20.386Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-10 11:38:20.369118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.272 [2024-12-10 11:38:20.369182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:53.272 [2024-12-10 11:38:20.369198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 
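Annotation: two of the figures in this stretch of the log can be cross-checked by hand. Pushing 1024 MiB through ftl0 at the reported average of 22 MBps implies roughly 45 s, consistent with the span of the progress timestamps (11:37:37 to 11:38:20), and the statistics dumped during the subsequent shutdown report WAF: 1.0094, which is simply total writes divided by user writes. Both checks in a few lines of Python, every number copied from this log:

# Throughput: 1024 MiB at the reported 22 MBps average.
print(1024 / 22)                    # ~46.5 s, vs. ~43 s between the first and last stamps

# Write amplification from the final ftl_dev_dump_stats block:
total_writes = 102848               # "total writes"
user_writes = 101888                # "user writes"
print(total_writes / user_writes)   # 1.00942..., logged as "WAF: 1.0094"

# The earlier shutdown (before any user I/O) logged "WAF: inf":
# its "user writes" counter was 0, so the ratio is reported as infinity.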
00:30:53.272 [2024-12-10 11:38:20.369210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.272 [2024-12-10 11:38:20.371763] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:53.272 [2024-12-10 11:38:20.378264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.272 [2024-12-10 11:38:20.378306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:53.272 [2024-12-10 11:38:20.378320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.457 ms 00:30:53.272 [2024-12-10 11:38:20.378336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.533 [2024-12-10 11:38:20.387186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.533 [2024-12-10 11:38:20.387228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:53.533 [2024-12-10 11:38:20.387241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.020 ms 00:30:53.533 [2024-12-10 11:38:20.387251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.533 [2024-12-10 11:38:20.409835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.533 [2024-12-10 11:38:20.409892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:53.533 [2024-12-10 11:38:20.409907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.603 ms 00:30:53.533 [2024-12-10 11:38:20.409928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.533 [2024-12-10 11:38:20.414663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.533 [2024-12-10 11:38:20.414696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:53.533 [2024-12-10 11:38:20.414707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.700 ms 00:30:53.533 [2024-12-10 11:38:20.414718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.533 [2024-12-10 11:38:20.448804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.533 [2024-12-10 11:38:20.448841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:53.533 [2024-12-10 11:38:20.448853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.092 ms 00:30:53.533 [2024-12-10 11:38:20.448879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.533 [2024-12-10 11:38:20.468937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.533 [2024-12-10 11:38:20.468982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:53.533 [2024-12-10 11:38:20.468996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.043 ms 00:30:53.533 [2024-12-10 11:38:20.469008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.533 [2024-12-10 11:38:20.590494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.533 [2024-12-10 11:38:20.590535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:53.533 [2024-12-10 11:38:20.590555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 121.642 ms 00:30:53.533 [2024-12-10 11:38:20.590567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.533 [2024-12-10 11:38:20.625525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.533 [2024-12-10 
11:38:20.625559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:53.533 [2024-12-10 11:38:20.625572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.998 ms 00:30:53.533 [2024-12-10 11:38:20.625611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [trace_step records for Action: Persist trim metadata (34.376 ms), Persist superblock (33.607 ms) and Set FTL clean state (33.619 ms), each with status: 0, condensed] 00:30:53.794 [2024-12-10 11:38:20.727482] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:53.794 [2024-12-10 11:38:20.727496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 101888 / 261120 wr_cnt: 1 state: open [Bands 2-100: 0 / 261120 wr_cnt: 0 state: free; 99 identical per-band records condensed] 00:30:53.795 [2024-12-10 11:38:20.728565] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:53.795 [2024-12-10 11:38:20.728575] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d171327a-a22c-46d0-a605-b88a158f0097 00:30:53.795 [2024-12-10 11:38:20.728602] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 101888 00:30:53.795 [2024-12-10 11:38:20.728612] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 102848 00:30:53.795 [2024-12-10 11:38:20.728622] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 101888 00:30:53.795 [2024-12-10 11:38:20.728632] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 00:30:53.795 [2024-12-10 11:38:20.728642] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:53.795 [2024-12-10 11:38:20.728652] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:53.795 [2024-12-10 11:38:20.728662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:53.795 [2024-12-10 11:38:20.728670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:53.795 [2024-12-10 11:38:20.728679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:53.795 [2024-12-10 11:38:20.728688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.795 [2024-12-10 11:38:20.728699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:53.795 [2024-12-10 11:38:20.728709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.209 ms 00:30:53.795 [2024-12-10 11:38:20.728719] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.795 [2024-12-10 11:38:20.747229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.795 [2024-12-10 11:38:20.747264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:53.795 [2024-12-10 11:38:20.747277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.507 ms 00:30:53.795 [2024-12-10 11:38:20.747303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.795 [2024-12-10 11:38:20.747810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.795 [2024-12-10 11:38:20.747828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:53.795 [2024-12-10 11:38:20.747842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:30:53.795 [2024-12-10 11:38:20.747852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.795 [2024-12-10 11:38:20.796591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.795 [2024-12-10 11:38:20.796629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:53.795 [2024-12-10 11:38:20.796640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.795 [2024-12-10 11:38:20.796650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.795 [2024-12-10 11:38:20.796698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.795 [2024-12-10 11:38:20.796709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:53.795 [2024-12-10 11:38:20.796725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.795 [2024-12-10 11:38:20.796734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.795 [2024-12-10 11:38:20.796805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.795 [2024-12-10 11:38:20.796819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:53.795 [2024-12-10 11:38:20.796829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.795 [2024-12-10 11:38:20.796839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.795 [2024-12-10 11:38:20.796854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.795 [2024-12-10 11:38:20.796864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:53.795 [2024-12-10 11:38:20.796874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.795 [2024-12-10 11:38:20.796884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.056 [2024-12-10 11:38:20.913449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.056 [2024-12-10 11:38:20.913501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:54.056 [2024-12-10 11:38:20.913514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.056 [2024-12-10 11:38:20.913531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.056 [2024-12-10 11:38:21.010076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.056 [2024-12-10 11:38:21.010120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:54.056 [2024-12-10 11:38:21.010133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:30:54.056 [2024-12-10 11:38:21.010149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.056 [2024-12-10 11:38:21.010226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.056 [2024-12-10 11:38:21.010238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:54.056 [2024-12-10 11:38:21.010248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.056 [2024-12-10 11:38:21.010258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.056 [2024-12-10 11:38:21.010293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.056 [2024-12-10 11:38:21.010304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:54.056 [2024-12-10 11:38:21.010314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.056 [2024-12-10 11:38:21.010324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.056 [2024-12-10 11:38:21.010435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.056 [2024-12-10 11:38:21.010448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:54.056 [2024-12-10 11:38:21.010474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.056 [2024-12-10 11:38:21.010485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.056 [2024-12-10 11:38:21.010518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.056 [2024-12-10 11:38:21.010530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:54.056 [2024-12-10 11:38:21.010540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.056 [2024-12-10 11:38:21.010551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.056 [2024-12-10 11:38:21.010590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.056 [2024-12-10 11:38:21.010601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:54.056 [2024-12-10 11:38:21.010611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.056 [2024-12-10 11:38:21.010621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.056 [2024-12-10 11:38:21.010660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.056 [2024-12-10 11:38:21.010672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:54.056 [2024-12-10 11:38:21.010682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.056 [2024-12-10 11:38:21.010692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.056 [2024-12-10 11:38:21.010808] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 644.340 ms, result 0 00:30:55.436 00:30:55.436 00:30:55.695 11:38:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:57.600 11:38:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:57.600 [2024-12-10 11:38:24.333971] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 
initialization... 00:30:57.600 [2024-12-10 11:38:24.334104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82888 ] 00:30:57.600 [2024-12-10 11:38:24.518304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.600 [2024-12-10 11:38:24.627045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.859 [2024-12-10 11:38:24.968719] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:57.859 [2024-12-10 11:38:24.968791] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:58.120 [2024-12-10 11:38:25.129751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.120 [2024-12-10 11:38:25.129802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:58.120 [2024-12-10 11:38:25.129817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:58.120 [2024-12-10 11:38:25.129827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.120 [2024-12-10 11:38:25.129872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.120 [2024-12-10 11:38:25.129886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:58.120 [2024-12-10 11:38:25.129896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:58.120 [2024-12-10 11:38:25.129906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.120 [2024-12-10 11:38:25.129939] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:58.120 [2024-12-10 11:38:25.130822] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:58.120 [2024-12-10 11:38:25.130849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.120 [2024-12-10 11:38:25.130860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:58.120 [2024-12-10 11:38:25.130871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:30:58.120 [2024-12-10 11:38:25.130881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.120 [2024-12-10 11:38:25.132366] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:58.120 [2024-12-10 11:38:25.149867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.120 [2024-12-10 11:38:25.149908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:58.120 [2024-12-10 11:38:25.149929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.529 ms 00:30:58.120 [2024-12-10 11:38:25.149940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.120 [2024-12-10 11:38:25.150006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.120 [2024-12-10 11:38:25.150018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:58.120 [2024-12-10 11:38:25.150029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:30:58.120 [2024-12-10 11:38:25.150038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.120 [2024-12-10 11:38:25.156827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:30:58.120 [2024-12-10 11:38:25.156853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:58.120 [2024-12-10 11:38:25.156865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.731 ms 00:30:58.120 [2024-12-10 11:38:25.156878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.120 [2024-12-10 11:38:25.156960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.120 [2024-12-10 11:38:25.156973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:58.120 [2024-12-10 11:38:25.156984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:58.120 [2024-12-10 11:38:25.156993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.120 [2024-12-10 11:38:25.157031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.120 [2024-12-10 11:38:25.157043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:58.120 [2024-12-10 11:38:25.157052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:58.120 [2024-12-10 11:38:25.157062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.120 [2024-12-10 11:38:25.157089] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:58.120 [2024-12-10 11:38:25.161692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.120 [2024-12-10 11:38:25.161724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:58.120 [2024-12-10 11:38:25.161739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.617 ms 00:30:58.120 [2024-12-10 11:38:25.161748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.120 [2024-12-10 11:38:25.161779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.120 [2024-12-10 11:38:25.161790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:58.120 [2024-12-10 11:38:25.161801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:58.120 [2024-12-10 11:38:25.161811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.120 [2024-12-10 11:38:25.161860] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:58.120 [2024-12-10 11:38:25.161885] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:58.120 [2024-12-10 11:38:25.161954] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:58.120 [2024-12-10 11:38:25.161979] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:58.120 [2024-12-10 11:38:25.162080] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:58.120 [2024-12-10 11:38:25.162093] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:58.120 [2024-12-10 11:38:25.162106] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:58.120 [2024-12-10 11:38:25.162119] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:58.120 [2024-12-10 11:38:25.162132] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:58.120 [2024-12-10 11:38:25.162144] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:58.120 [2024-12-10 11:38:25.162155] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:58.120 [2024-12-10 11:38:25.162168] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:58.120 [2024-12-10 11:38:25.162178] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:58.120 [2024-12-10 11:38:25.162190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.120 [2024-12-10 11:38:25.162200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:58.120 [2024-12-10 11:38:25.162211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:30:58.120 [2024-12-10 11:38:25.162220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.120 [2024-12-10 11:38:25.162296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.120 [2024-12-10 11:38:25.162307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:58.120 [2024-12-10 11:38:25.162316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:30:58.120 [2024-12-10 11:38:25.162326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.120 [2024-12-10 11:38:25.162412] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:58.120 [2024-12-10 11:38:25.162427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:58.120 [2024-12-10 11:38:25.162438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:58.120 [2024-12-10 11:38:25.162448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.120 [2024-12-10 11:38:25.162459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:58.120 [2024-12-10 11:38:25.162469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:58.120 [2024-12-10 11:38:25.162478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:58.120 [2024-12-10 11:38:25.162487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:58.120 [2024-12-10 11:38:25.162496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:58.120 [2024-12-10 11:38:25.162506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:58.120 [2024-12-10 11:38:25.162517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:58.120 [2024-12-10 11:38:25.162526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:58.120 [2024-12-10 11:38:25.162535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:58.120 [2024-12-10 11:38:25.162555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:58.120 [2024-12-10 11:38:25.162565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:58.120 [2024-12-10 11:38:25.162574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.120 [2024-12-10 11:38:25.162583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:58.120 [2024-12-10 11:38:25.162592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:58.120 [2024-12-10 11:38:25.162601] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.120 [2024-12-10 11:38:25.162610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:58.120 [2024-12-10 11:38:25.162618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:58.120 [2024-12-10 11:38:25.162627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.120 [2024-12-10 11:38:25.162636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:58.120 [2024-12-10 11:38:25.162645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:58.120 [2024-12-10 11:38:25.162653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.120 [2024-12-10 11:38:25.162663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:58.121 [2024-12-10 11:38:25.162672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:58.121 [2024-12-10 11:38:25.162680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.121 [2024-12-10 11:38:25.162689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:58.121 [2024-12-10 11:38:25.162698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:58.121 [2024-12-10 11:38:25.162707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.121 [2024-12-10 11:38:25.162716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:58.121 [2024-12-10 11:38:25.162724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:58.121 [2024-12-10 11:38:25.162733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:58.121 [2024-12-10 11:38:25.162742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:58.121 [2024-12-10 11:38:25.162751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:58.121 [2024-12-10 11:38:25.162760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:58.121 [2024-12-10 11:38:25.162770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:58.121 [2024-12-10 11:38:25.162778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:58.121 [2024-12-10 11:38:25.162787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.121 [2024-12-10 11:38:25.162796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:58.121 [2024-12-10 11:38:25.162805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:58.121 [2024-12-10 11:38:25.162814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.121 [2024-12-10 11:38:25.162823] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:58.121 [2024-12-10 11:38:25.162832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:58.121 [2024-12-10 11:38:25.162842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:58.121 [2024-12-10 11:38:25.162851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.121 [2024-12-10 11:38:25.162861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:58.121 [2024-12-10 11:38:25.162870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:58.121 [2024-12-10 11:38:25.162879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:58.121 
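The dump_region records above can be cross-checked against the geometry reported a few lines earlier: 20971520 L2P entries at an address size of 4 bytes come to 20971520 * 4 = 83886080 bytes, which is exactly the 80.00 MiB shown for the l2p region. A minimal shell check of that arithmetic (plain awk, nothing SPDK-specific assumed):

awk 'BEGIN {
    entries = 20971520; addr_size = 4    # L2P entries and L2P address size from the log above
    printf "l2p region: %.2f MiB\n", entries * addr_size / (1024 * 1024)    # expect 80.00 MiB
}'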
[2024-12-10 11:38:25.162889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:58.121 [2024-12-10 11:38:25.162897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:58.121 [2024-12-10 11:38:25.162906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:58.121 [2024-12-10 11:38:25.162916] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:58.121 [2024-12-10 11:38:25.162939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:58.121 [2024-12-10 11:38:25.162956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:58.121 [2024-12-10 11:38:25.162967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:58.121 [2024-12-10 11:38:25.162977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:58.121 [2024-12-10 11:38:25.162988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:58.121 [2024-12-10 11:38:25.162998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:58.121 [2024-12-10 11:38:25.163008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:58.121 [2024-12-10 11:38:25.163019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:58.121 [2024-12-10 11:38:25.163029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:58.121 [2024-12-10 11:38:25.163039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:58.121 [2024-12-10 11:38:25.163049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:58.121 [2024-12-10 11:38:25.163059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:58.121 [2024-12-10 11:38:25.163069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:58.121 [2024-12-10 11:38:25.163078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:58.121 [2024-12-10 11:38:25.163089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:58.121 [2024-12-10 11:38:25.163098] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:58.121 [2024-12-10 11:38:25.163109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:58.121 [2024-12-10 11:38:25.163119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:58.121 [2024-12-10 11:38:25.163129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:58.121 [2024-12-10 11:38:25.163138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:58.121 [2024-12-10 11:38:25.163149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:58.121 [2024-12-10 11:38:25.163159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.121 [2024-12-10 11:38:25.163171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:58.121 [2024-12-10 11:38:25.163181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms 00:30:58.121 [2024-12-10 11:38:25.163190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.121 [2024-12-10 11:38:25.200296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.121 [2024-12-10 11:38:25.200333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:58.121 [2024-12-10 11:38:25.200346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.121 ms 00:30:58.121 [2024-12-10 11:38:25.200360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.121 [2024-12-10 11:38:25.200429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.121 [2024-12-10 11:38:25.200440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:58.121 [2024-12-10 11:38:25.200450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:58.121 [2024-12-10 11:38:25.200459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.380 [2024-12-10 11:38:25.270036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.380 [2024-12-10 11:38:25.270075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:58.380 [2024-12-10 11:38:25.270088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.635 ms 00:30:58.380 [2024-12-10 11:38:25.270098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.380 [2024-12-10 11:38:25.270137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.380 [2024-12-10 11:38:25.270149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:58.380 [2024-12-10 11:38:25.270163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:58.380 [2024-12-10 11:38:25.270173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.380 [2024-12-10 11:38:25.270675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.380 [2024-12-10 11:38:25.270698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:58.380 [2024-12-10 11:38:25.270709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:30:58.380 [2024-12-10 11:38:25.270719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.380 [2024-12-10 11:38:25.270836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.380 [2024-12-10 11:38:25.270850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:58.380 [2024-12-10 11:38:25.270864] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:30:58.380 [2024-12-10 11:38:25.270874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.380 [2024-12-10 11:38:25.289656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.380 [2024-12-10 11:38:25.289694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:58.380 [2024-12-10 11:38:25.289707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.792 ms 00:30:58.380 [2024-12-10 11:38:25.289717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.380 [2024-12-10 11:38:25.308289] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:58.380 [2024-12-10 11:38:25.308335] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:58.380 [2024-12-10 11:38:25.308349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.380 [2024-12-10 11:38:25.308360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:58.380 [2024-12-10 11:38:25.308371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.565 ms 00:30:58.380 [2024-12-10 11:38:25.308380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.380 [2024-12-10 11:38:25.336071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.380 [2024-12-10 11:38:25.336111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:58.380 [2024-12-10 11:38:25.336125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.691 ms 00:30:58.380 [2024-12-10 11:38:25.336135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.380 [2024-12-10 11:38:25.353041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.380 [2024-12-10 11:38:25.353080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:58.380 [2024-12-10 11:38:25.353093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.885 ms 00:30:58.380 [2024-12-10 11:38:25.353118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.380 [2024-12-10 11:38:25.370161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.380 [2024-12-10 11:38:25.370200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:58.381 [2024-12-10 11:38:25.370213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.031 ms 00:30:58.381 [2024-12-10 11:38:25.370223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.381 [2024-12-10 11:38:25.370962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.381 [2024-12-10 11:38:25.370995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:58.381 [2024-12-10 11:38:25.371011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:30:58.381 [2024-12-10 11:38:25.371021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.381 [2024-12-10 11:38:25.454252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.381 [2024-12-10 11:38:25.454307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:58.381 [2024-12-10 11:38:25.454357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 83.343 ms 00:30:58.381 [2024-12-10 11:38:25.454368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.381 [2024-12-10 11:38:25.464385] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:58.381 [2024-12-10 11:38:25.466660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.381 [2024-12-10 11:38:25.466693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:58.381 [2024-12-10 11:38:25.466706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.266 ms 00:30:58.381 [2024-12-10 11:38:25.466715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.381 [2024-12-10 11:38:25.466789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.381 [2024-12-10 11:38:25.466803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:58.381 [2024-12-10 11:38:25.466818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:58.381 [2024-12-10 11:38:25.466828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.381 [2024-12-10 11:38:25.468275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.381 [2024-12-10 11:38:25.468315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:58.381 [2024-12-10 11:38:25.468327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:30:58.381 [2024-12-10 11:38:25.468337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.381 [2024-12-10 11:38:25.468366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.381 [2024-12-10 11:38:25.468378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:58.381 [2024-12-10 11:38:25.468388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:58.381 [2024-12-10 11:38:25.468398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.381 [2024-12-10 11:38:25.468443] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:58.381 [2024-12-10 11:38:25.468465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.381 [2024-12-10 11:38:25.468475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:58.381 [2024-12-10 11:38:25.468487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:30:58.381 [2024-12-10 11:38:25.468496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.640 [2024-12-10 11:38:25.502700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.640 [2024-12-10 11:38:25.502751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:58.640 [2024-12-10 11:38:25.502771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.241 ms 00:30:58.640 [2024-12-10 11:38:25.502781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.640 [2024-12-10 11:38:25.502848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.640 [2024-12-10 11:38:25.502860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:58.640 [2024-12-10 11:38:25.502870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:58.640 [2024-12-10 11:38:25.502880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
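Each trace_step record above pairs an action name with its own duration, and the finish_msg that follows reports the wall-clock total for the whole management process ('FTL startup', 374.373 ms in this run). If this console output were saved to a file, the per-step durations could be tallied with standard text tools; a rough sketch, with ftl.log standing in as a hypothetical capture of this output:

# sum every per-step 'duration: X ms' emitted by trace_step; the result should
# come close to (but not exactly match) the finish_msg total, since time spent
# between steps is not attributed to any single step
grep -o 'duration: [0-9.]* ms' ftl.log | awk '{ sum += $2 } END { printf "steps total: %.3f ms\n", sum }'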
00:30:58.640 [2024-12-10 11:38:25.503975] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 374.373 ms, result 0 00:31:00.018  [2024-12-10T11:38:28.070Z] Copying: 1292/1048576 [kB] (1292 kBps) [2024-12-10T11:38:29.008Z] Copying: 9172/1048576 [kB] (7880 kBps) [2024-12-10T11:38:29.946Z] Copying: 42/1024 [MB] (33 MBps) [2024-12-10T11:38:30.884Z] Copying: 74/1024 [MB] (32 MBps) [2024-12-10T11:38:31.822Z] Copying: 105/1024 [MB] (31 MBps) [2024-12-10T11:38:32.791Z] Copying: 137/1024 [MB] (32 MBps) [2024-12-10T11:38:33.729Z] Copying: 169/1024 [MB] (31 MBps) [2024-12-10T11:38:35.108Z] Copying: 202/1024 [MB] (32 MBps) [2024-12-10T11:38:36.046Z] Copying: 235/1024 [MB] (32 MBps) [2024-12-10T11:38:36.984Z] Copying: 267/1024 [MB] (32 MBps) [2024-12-10T11:38:37.922Z] Copying: 300/1024 [MB] (32 MBps) [2024-12-10T11:38:38.859Z] Copying: 333/1024 [MB] (33 MBps) [2024-12-10T11:38:39.796Z] Copying: 367/1024 [MB] (33 MBps) [2024-12-10T11:38:40.733Z] Copying: 399/1024 [MB] (32 MBps) [2024-12-10T11:38:42.113Z] Copying: 431/1024 [MB] (32 MBps) [2024-12-10T11:38:43.051Z] Copying: 464/1024 [MB] (32 MBps) [2024-12-10T11:38:43.989Z] Copying: 496/1024 [MB] (32 MBps) [2024-12-10T11:38:44.929Z] Copying: 528/1024 [MB] (32 MBps) [2024-12-10T11:38:45.866Z] Copying: 561/1024 [MB] (32 MBps) [2024-12-10T11:38:46.804Z] Copying: 593/1024 [MB] (32 MBps) [2024-12-10T11:38:47.742Z] Copying: 625/1024 [MB] (31 MBps) [2024-12-10T11:38:48.680Z] Copying: 657/1024 [MB] (32 MBps) [2024-12-10T11:38:50.058Z] Copying: 689/1024 [MB] (32 MBps) [2024-12-10T11:38:50.996Z] Copying: 722/1024 [MB] (32 MBps) [2024-12-10T11:38:51.934Z] Copying: 754/1024 [MB] (31 MBps) [2024-12-10T11:38:52.872Z] Copying: 785/1024 [MB] (31 MBps) [2024-12-10T11:38:53.809Z] Copying: 818/1024 [MB] (32 MBps) [2024-12-10T11:38:54.747Z] Copying: 850/1024 [MB] (32 MBps) [2024-12-10T11:38:55.774Z] Copying: 883/1024 [MB] (32 MBps) [2024-12-10T11:38:56.712Z] Copying: 916/1024 [MB] (33 MBps) [2024-12-10T11:38:58.092Z] Copying: 948/1024 [MB] (31 MBps) [2024-12-10T11:38:58.659Z] Copying: 980/1024 [MB] (32 MBps) [2024-12-10T11:38:59.228Z] Copying: 1013/1024 [MB] (32 MBps) [2024-12-10T11:38:59.228Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-12-10 11:38:59.028651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.114 [2024-12-10 11:38:59.028732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:32.114 [2024-12-10 11:38:59.028755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:32.114 [2024-12-10 11:38:59.028771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.114 [2024-12-10 11:38:59.028804] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:32.114 [2024-12-10 11:38:59.037622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.114 [2024-12-10 11:38:59.037678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:32.114 [2024-12-10 11:38:59.037702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.806 ms 00:31:32.114 [2024-12-10 11:38:59.037721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.114 [2024-12-10 11:38:59.038099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.114 [2024-12-10 11:38:59.038153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:32.114 [2024-12-10 
11:38:59.038174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:31:32.114 [2024-12-10 11:38:59.038192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.114 [2024-12-10 11:38:59.052806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.114 [2024-12-10 11:38:59.052854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:32.114 [2024-12-10 11:38:59.052869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.606 ms 00:31:32.114 [2024-12-10 11:38:59.052880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.114 [2024-12-10 11:38:59.057661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.114 [2024-12-10 11:38:59.057698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:32.114 [2024-12-10 11:38:59.057716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.743 ms 00:31:32.114 [2024-12-10 11:38:59.057726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.114 [2024-12-10 11:38:59.092505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.114 [2024-12-10 11:38:59.092556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:32.114 [2024-12-10 11:38:59.092569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.784 ms 00:31:32.114 [2024-12-10 11:38:59.092578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.114 [2024-12-10 11:38:59.112517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.114 [2024-12-10 11:38:59.112556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:32.114 [2024-12-10 11:38:59.112569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.919 ms 00:31:32.114 [2024-12-10 11:38:59.112578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.114 [2024-12-10 11:38:59.114720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.114 [2024-12-10 11:38:59.114759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:32.114 [2024-12-10 11:38:59.114771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.089 ms 00:31:32.114 [2024-12-10 11:38:59.114787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.114 [2024-12-10 11:38:59.148397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.114 [2024-12-10 11:38:59.148429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:32.114 [2024-12-10 11:38:59.148442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.647 ms 00:31:32.114 [2024-12-10 11:38:59.148450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.114 [2024-12-10 11:38:59.183311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.114 [2024-12-10 11:38:59.183351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:32.114 [2024-12-10 11:38:59.183364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.866 ms 00:31:32.114 [2024-12-10 11:38:59.183373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.114 [2024-12-10 11:38:59.218394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.114 [2024-12-10 11:38:59.218431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist superblock 00:31:32.114 [2024-12-10 11:38:59.218443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.042 ms 00:31:32.114 [2024-12-10 11:38:59.218452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.375 [2024-12-10 11:38:59.252436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.375 [2024-12-10 11:38:59.252472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:32.375 [2024-12-10 11:38:59.252485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.953 ms 00:31:32.375 [2024-12-10 11:38:59.252494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.375 [2024-12-10 11:38:59.252544] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:32.375 [2024-12-10 11:38:59.252559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:32.375 [2024-12-10 11:38:59.252571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:31:32.375 [2024-12-10 11:38:59.252582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 
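The bands-validity dump that continues below is internally consistent: the two non-free bands account for 261120 + 1536 = 262656 valid blocks, matching the 'total valid LBAs' figure in the statistics block at the end of the dump, and the WAF reported there is simply total writes divided by user writes. A quick check with the numbers from this run:

awk 'BEGIN {
    printf "valid LBAs: %d\n", 261120 + 1536    # Band 1 (closed) + Band 2 (open); expect 262656
    printf "WAF: %.4f\n", 162752 / 160768       # total writes / user writes; expect 1.0123
}'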
00:31:32.375 [2024-12-10 11:38:59.252757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.252996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 
wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:32.375 [2024-12-10 11:38:59.253356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253558] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:32.376 [2024-12-10 11:38:59.253637] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:32.376 [2024-12-10 11:38:59.253647] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d171327a-a22c-46d0-a605-b88a158f0097 00:31:32.376 [2024-12-10 11:38:59.253658] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:31:32.376 [2024-12-10 11:38:59.253668] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 162752 00:31:32.376 [2024-12-10 11:38:59.253682] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 160768 00:31:32.376 [2024-12-10 11:38:59.253692] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0123 00:31:32.376 [2024-12-10 11:38:59.253702] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:32.376 [2024-12-10 11:38:59.253721] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:32.376 [2024-12-10 11:38:59.253732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:32.376 [2024-12-10 11:38:59.253741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:32.376 [2024-12-10 11:38:59.253749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:32.376 [2024-12-10 11:38:59.253759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.376 [2024-12-10 11:38:59.253769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:32.376 [2024-12-10 11:38:59.253779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:31:32.376 [2024-12-10 11:38:59.253789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.376 [2024-12-10 11:38:59.273039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.376 [2024-12-10 11:38:59.273073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:32.376 [2024-12-10 11:38:59.273101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.248 ms 00:31:32.376 [2024-12-10 11:38:59.273110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.376 [2024-12-10 11:38:59.273671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.376 [2024-12-10 11:38:59.273704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:32.376 [2024-12-10 11:38:59.273715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:31:32.376 [2024-12-10 11:38:59.273725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:31:32.376 [2024-12-10 11:38:59.320524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.376 [2024-12-10 11:38:59.320560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:32.376 [2024-12-10 11:38:59.320572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.376 [2024-12-10 11:38:59.320582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.376 [2024-12-10 11:38:59.320645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.376 [2024-12-10 11:38:59.320656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:32.376 [2024-12-10 11:38:59.320666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.376 [2024-12-10 11:38:59.320676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.376 [2024-12-10 11:38:59.320744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.376 [2024-12-10 11:38:59.320757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:32.376 [2024-12-10 11:38:59.320767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.376 [2024-12-10 11:38:59.320777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.376 [2024-12-10 11:38:59.320793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.376 [2024-12-10 11:38:59.320802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:32.376 [2024-12-10 11:38:59.320812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.376 [2024-12-10 11:38:59.320821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.376 [2024-12-10 11:38:59.434810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.376 [2024-12-10 11:38:59.434858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:32.376 [2024-12-10 11:38:59.434889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.376 [2024-12-10 11:38:59.434898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.635 [2024-12-10 11:38:59.528828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.635 [2024-12-10 11:38:59.528878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:32.636 [2024-12-10 11:38:59.528891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.636 [2024-12-10 11:38:59.528901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.636 [2024-12-10 11:38:59.529029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.636 [2024-12-10 11:38:59.529048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:32.636 [2024-12-10 11:38:59.529059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.636 [2024-12-10 11:38:59.529069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.636 [2024-12-10 11:38:59.529107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.636 [2024-12-10 11:38:59.529117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:32.636 [2024-12-10 11:38:59.529127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.636 [2024-12-10 
11:38:59.529136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.636 [2024-12-10 11:38:59.529247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.636 [2024-12-10 11:38:59.529260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:32.636 [2024-12-10 11:38:59.529275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.636 [2024-12-10 11:38:59.529300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.636 [2024-12-10 11:38:59.529340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.636 [2024-12-10 11:38:59.529352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:32.636 [2024-12-10 11:38:59.529363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.636 [2024-12-10 11:38:59.529372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.636 [2024-12-10 11:38:59.529413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.636 [2024-12-10 11:38:59.529424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:32.636 [2024-12-10 11:38:59.529439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.636 [2024-12-10 11:38:59.529449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.636 [2024-12-10 11:38:59.529490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.636 [2024-12-10 11:38:59.529501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:32.636 [2024-12-10 11:38:59.529511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.636 [2024-12-10 11:38:59.529521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.636 [2024-12-10 11:38:59.529661] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 501.811 ms, result 0 00:31:33.573 00:31:33.573 00:31:33.574 11:39:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:35.480 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:35.480 11:39:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:35.480 [2024-12-10 11:39:02.282723] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:31:35.480 [2024-12-10 11:39:02.282982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83264 ] 00:31:35.480 [2024-12-10 11:39:02.465573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.480 [2024-12-10 11:39:02.575550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.050 [2024-12-10 11:39:02.916501] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:36.050 [2024-12-10 11:39:02.916590] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:36.050 [2024-12-10 11:39:03.077655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.050 [2024-12-10 11:39:03.077709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:36.050 [2024-12-10 11:39:03.077725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:36.050 [2024-12-10 11:39:03.077735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.050 [2024-12-10 11:39:03.077796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.050 [2024-12-10 11:39:03.077812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:36.050 [2024-12-10 11:39:03.077824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:31:36.050 [2024-12-10 11:39:03.077834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.050 [2024-12-10 11:39:03.077855] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:36.050 [2024-12-10 11:39:03.078843] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:36.050 [2024-12-10 11:39:03.078870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.050 [2024-12-10 11:39:03.078880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:36.050 [2024-12-10 11:39:03.078891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:31:36.050 [2024-12-10 11:39:03.078900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.050 [2024-12-10 11:39:03.080374] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:36.050 [2024-12-10 11:39:03.098982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.050 [2024-12-10 11:39:03.099023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:36.050 [2024-12-10 11:39:03.099037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.639 ms 00:31:36.050 [2024-12-10 11:39:03.099048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.050 [2024-12-10 11:39:03.099132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.050 [2024-12-10 11:39:03.099144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:36.050 [2024-12-10 11:39:03.099155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:31:36.050 [2024-12-10 11:39:03.099165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.050 [2024-12-10 11:39:03.105870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:36.050 [2024-12-10 11:39:03.105901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:36.050 [2024-12-10 11:39:03.105935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.645 ms 00:31:36.050 [2024-12-10 11:39:03.105950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.050 [2024-12-10 11:39:03.106027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.050 [2024-12-10 11:39:03.106041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:36.050 [2024-12-10 11:39:03.106051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:31:36.050 [2024-12-10 11:39:03.106061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.050 [2024-12-10 11:39:03.106100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.050 [2024-12-10 11:39:03.106112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:36.050 [2024-12-10 11:39:03.106122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:36.050 [2024-12-10 11:39:03.106132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.050 [2024-12-10 11:39:03.106158] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:36.050 [2024-12-10 11:39:03.111021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.050 [2024-12-10 11:39:03.111056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:36.050 [2024-12-10 11:39:03.111072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.875 ms 00:31:36.050 [2024-12-10 11:39:03.111097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.050 [2024-12-10 11:39:03.111131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.050 [2024-12-10 11:39:03.111142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:36.050 [2024-12-10 11:39:03.111153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:36.050 [2024-12-10 11:39:03.111164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.050 [2024-12-10 11:39:03.111217] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:36.050 [2024-12-10 11:39:03.111242] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:36.050 [2024-12-10 11:39:03.111277] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:36.050 [2024-12-10 11:39:03.111297] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:36.050 [2024-12-10 11:39:03.111388] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:36.050 [2024-12-10 11:39:03.111401] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:36.051 [2024-12-10 11:39:03.111414] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:36.051 [2024-12-10 11:39:03.111427] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:36.051 [2024-12-10 11:39:03.111439] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:36.051 [2024-12-10 11:39:03.111450] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:36.051 [2024-12-10 11:39:03.111460] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:36.051 [2024-12-10 11:39:03.111474] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:36.051 [2024-12-10 11:39:03.111484] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:36.051 [2024-12-10 11:39:03.111494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.051 [2024-12-10 11:39:03.111504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:36.051 [2024-12-10 11:39:03.111514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:31:36.051 [2024-12-10 11:39:03.111524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.051 [2024-12-10 11:39:03.111595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.051 [2024-12-10 11:39:03.111606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:36.051 [2024-12-10 11:39:03.111616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:36.051 [2024-12-10 11:39:03.111625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.051 [2024-12-10 11:39:03.111714] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:36.051 [2024-12-10 11:39:03.111727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:36.051 [2024-12-10 11:39:03.111738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:36.051 [2024-12-10 11:39:03.111749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.051 [2024-12-10 11:39:03.111760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:36.051 [2024-12-10 11:39:03.111769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:36.051 [2024-12-10 11:39:03.111778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:36.051 [2024-12-10 11:39:03.111787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:36.051 [2024-12-10 11:39:03.111797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:36.051 [2024-12-10 11:39:03.111806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:36.051 [2024-12-10 11:39:03.111816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:36.051 [2024-12-10 11:39:03.111825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:36.051 [2024-12-10 11:39:03.111835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:36.051 [2024-12-10 11:39:03.111855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:36.051 [2024-12-10 11:39:03.111865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:36.051 [2024-12-10 11:39:03.111874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.051 [2024-12-10 11:39:03.111884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:36.051 [2024-12-10 11:39:03.111893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:36.051 [2024-12-10 11:39:03.111902] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.051 [2024-12-10 11:39:03.111911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:36.051 [2024-12-10 11:39:03.111921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:36.051 [2024-12-10 11:39:03.111941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:36.051 [2024-12-10 11:39:03.111951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:36.051 [2024-12-10 11:39:03.111960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:36.051 [2024-12-10 11:39:03.111970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:36.051 [2024-12-10 11:39:03.111978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:36.051 [2024-12-10 11:39:03.111988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:36.051 [2024-12-10 11:39:03.111997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:36.051 [2024-12-10 11:39:03.112006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:36.051 [2024-12-10 11:39:03.112015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:36.051 [2024-12-10 11:39:03.112024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:36.051 [2024-12-10 11:39:03.112033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:36.051 [2024-12-10 11:39:03.112043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:36.051 [2024-12-10 11:39:03.112051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:36.051 [2024-12-10 11:39:03.112060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:36.051 [2024-12-10 11:39:03.112069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:36.051 [2024-12-10 11:39:03.112078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:36.051 [2024-12-10 11:39:03.112087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:36.051 [2024-12-10 11:39:03.112096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:36.051 [2024-12-10 11:39:03.112105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.051 [2024-12-10 11:39:03.112115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:36.051 [2024-12-10 11:39:03.112123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:36.051 [2024-12-10 11:39:03.112133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.051 [2024-12-10 11:39:03.112142] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:36.051 [2024-12-10 11:39:03.112152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:36.051 [2024-12-10 11:39:03.112161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:36.051 [2024-12-10 11:39:03.112171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.051 [2024-12-10 11:39:03.112181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:36.051 [2024-12-10 11:39:03.112190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:36.051 [2024-12-10 11:39:03.112200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:36.051 
[2024-12-10 11:39:03.112210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:36.051 [2024-12-10 11:39:03.112218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:36.051 [2024-12-10 11:39:03.112227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:36.051 [2024-12-10 11:39:03.112238] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:36.051 [2024-12-10 11:39:03.112250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:36.051 [2024-12-10 11:39:03.112265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:36.051 [2024-12-10 11:39:03.112276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:36.051 [2024-12-10 11:39:03.112286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:36.051 [2024-12-10 11:39:03.112296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:36.051 [2024-12-10 11:39:03.112306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:36.051 [2024-12-10 11:39:03.112317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:36.051 [2024-12-10 11:39:03.112333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:36.051 [2024-12-10 11:39:03.112343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:36.051 [2024-12-10 11:39:03.112354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:36.051 [2024-12-10 11:39:03.112364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:36.051 [2024-12-10 11:39:03.112374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:36.051 [2024-12-10 11:39:03.112385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:36.051 [2024-12-10 11:39:03.112395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:36.051 [2024-12-10 11:39:03.112405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:36.051 [2024-12-10 11:39:03.112415] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:36.051 [2024-12-10 11:39:03.112426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:36.051 [2024-12-10 11:39:03.112438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:36.051 [2024-12-10 11:39:03.112449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:36.051 [2024-12-10 11:39:03.112459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:36.051 [2024-12-10 11:39:03.112469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:36.051 [2024-12-10 11:39:03.112479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.051 [2024-12-10 11:39:03.112489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:36.051 [2024-12-10 11:39:03.112500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:31:36.051 [2024-12-10 11:39:03.112509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.051 [2024-12-10 11:39:03.150717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.051 [2024-12-10 11:39:03.150753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:36.051 [2024-12-10 11:39:03.150782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.224 ms 00:31:36.051 [2024-12-10 11:39:03.150798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.051 [2024-12-10 11:39:03.150872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.051 [2024-12-10 11:39:03.150883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:36.051 [2024-12-10 11:39:03.150894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:31:36.051 [2024-12-10 11:39:03.150903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.311 [2024-12-10 11:39:03.202414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.311 [2024-12-10 11:39:03.202450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:36.311 [2024-12-10 11:39:03.202463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.514 ms 00:31:36.311 [2024-12-10 11:39:03.202473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.311 [2024-12-10 11:39:03.202525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.311 [2024-12-10 11:39:03.202536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:36.311 [2024-12-10 11:39:03.202551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:36.311 [2024-12-10 11:39:03.202560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.311 [2024-12-10 11:39:03.203075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.311 [2024-12-10 11:39:03.203090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:36.311 [2024-12-10 11:39:03.203100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:31:36.311 [2024-12-10 11:39:03.203110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.311 [2024-12-10 11:39:03.203228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.311 [2024-12-10 11:39:03.203241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:36.311 [2024-12-10 11:39:03.203255] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:31:36.311 [2024-12-10 11:39:03.203266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.311 [2024-12-10 11:39:03.222377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.311 [2024-12-10 11:39:03.222413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:36.311 [2024-12-10 11:39:03.222426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.121 ms 00:31:36.311 [2024-12-10 11:39:03.222436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.311 [2024-12-10 11:39:03.240606] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:36.311 [2024-12-10 11:39:03.240645] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:36.311 [2024-12-10 11:39:03.240675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.311 [2024-12-10 11:39:03.240687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:36.311 [2024-12-10 11:39:03.240698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.153 ms 00:31:36.311 [2024-12-10 11:39:03.240708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.311 [2024-12-10 11:39:03.268634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.311 [2024-12-10 11:39:03.268675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:36.311 [2024-12-10 11:39:03.268705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.930 ms 00:31:36.311 [2024-12-10 11:39:03.268715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.311 [2024-12-10 11:39:03.286276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.311 [2024-12-10 11:39:03.286314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:36.311 [2024-12-10 11:39:03.286342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.532 ms 00:31:36.311 [2024-12-10 11:39:03.286353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.311 [2024-12-10 11:39:03.304124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.311 [2024-12-10 11:39:03.304162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:36.311 [2024-12-10 11:39:03.304174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.763 ms 00:31:36.311 [2024-12-10 11:39:03.304184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.311 [2024-12-10 11:39:03.304906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.311 [2024-12-10 11:39:03.304950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:36.311 [2024-12-10 11:39:03.304967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:31:36.311 [2024-12-10 11:39:03.304977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.311 [2024-12-10 11:39:03.389992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.311 [2024-12-10 11:39:03.390042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:36.311 [2024-12-10 11:39:03.390065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.129 ms 00:31:36.312 [2024-12-10 11:39:03.390077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.312 [2024-12-10 11:39:03.400212] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:36.312 [2024-12-10 11:39:03.402719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.312 [2024-12-10 11:39:03.402751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:36.312 [2024-12-10 11:39:03.402779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.617 ms 00:31:36.312 [2024-12-10 11:39:03.402789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.312 [2024-12-10 11:39:03.402871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.312 [2024-12-10 11:39:03.402885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:36.312 [2024-12-10 11:39:03.402901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:36.312 [2024-12-10 11:39:03.402910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.312 [2024-12-10 11:39:03.403780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.312 [2024-12-10 11:39:03.403802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:36.312 [2024-12-10 11:39:03.403813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:31:36.312 [2024-12-10 11:39:03.403822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.312 [2024-12-10 11:39:03.403850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.312 [2024-12-10 11:39:03.403862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:36.312 [2024-12-10 11:39:03.403873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:36.312 [2024-12-10 11:39:03.403883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.312 [2024-12-10 11:39:03.403937] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:36.312 [2024-12-10 11:39:03.403951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.312 [2024-12-10 11:39:03.403962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:36.312 [2024-12-10 11:39:03.403972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:36.312 [2024-12-10 11:39:03.403982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.571 [2024-12-10 11:39:03.439026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.571 [2024-12-10 11:39:03.439064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:36.571 [2024-12-10 11:39:03.439099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.080 ms 00:31:36.571 [2024-12-10 11:39:03.439110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.571 [2024-12-10 11:39:03.439176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.571 [2024-12-10 11:39:03.439189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:36.571 [2024-12-10 11:39:03.439199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:31:36.571 [2024-12-10 11:39:03.439209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:36.571 [2024-12-10 11:39:03.440287] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 362.755 ms, result 0 00:31:37.950  [2024-12-10T11:39:06.000Z] Copying: 25/1024 [MB] (25 MBps) [2024-12-10T11:39:06.938Z] Copying: 52/1024 [MB] (26 MBps) [2024-12-10T11:39:07.876Z] Copying: 78/1024 [MB] (26 MBps) [2024-12-10T11:39:08.813Z] Copying: 104/1024 [MB] (25 MBps) [2024-12-10T11:39:09.751Z] Copying: 130/1024 [MB] (25 MBps) [2024-12-10T11:39:10.687Z] Copying: 157/1024 [MB] (26 MBps) [2024-12-10T11:39:12.065Z] Copying: 183/1024 [MB] (26 MBps) [2024-12-10T11:39:12.633Z] Copying: 209/1024 [MB] (26 MBps) [2024-12-10T11:39:14.011Z] Copying: 235/1024 [MB] (25 MBps) [2024-12-10T11:39:14.948Z] Copying: 262/1024 [MB] (26 MBps) [2024-12-10T11:39:15.884Z] Copying: 288/1024 [MB] (26 MBps) [2024-12-10T11:39:16.821Z] Copying: 314/1024 [MB] (26 MBps) [2024-12-10T11:39:17.784Z] Copying: 341/1024 [MB] (26 MBps) [2024-12-10T11:39:18.786Z] Copying: 367/1024 [MB] (26 MBps) [2024-12-10T11:39:19.721Z] Copying: 393/1024 [MB] (26 MBps) [2024-12-10T11:39:20.657Z] Copying: 420/1024 [MB] (26 MBps) [2024-12-10T11:39:22.036Z] Copying: 446/1024 [MB] (26 MBps) [2024-12-10T11:39:22.974Z] Copying: 472/1024 [MB] (26 MBps) [2024-12-10T11:39:23.911Z] Copying: 498/1024 [MB] (26 MBps) [2024-12-10T11:39:24.849Z] Copying: 523/1024 [MB] (24 MBps) [2024-12-10T11:39:25.785Z] Copying: 550/1024 [MB] (26 MBps) [2024-12-10T11:39:26.723Z] Copying: 576/1024 [MB] (26 MBps) [2024-12-10T11:39:27.660Z] Copying: 602/1024 [MB] (26 MBps) [2024-12-10T11:39:29.040Z] Copying: 628/1024 [MB] (26 MBps) [2024-12-10T11:39:29.608Z] Copying: 654/1024 [MB] (25 MBps) [2024-12-10T11:39:30.987Z] Copying: 680/1024 [MB] (25 MBps) [2024-12-10T11:39:31.924Z] Copying: 706/1024 [MB] (25 MBps) [2024-12-10T11:39:32.861Z] Copying: 732/1024 [MB] (26 MBps) [2024-12-10T11:39:33.797Z] Copying: 757/1024 [MB] (25 MBps) [2024-12-10T11:39:34.734Z] Copying: 783/1024 [MB] (25 MBps) [2024-12-10T11:39:35.680Z] Copying: 809/1024 [MB] (26 MBps) [2024-12-10T11:39:36.616Z] Copying: 835/1024 [MB] (25 MBps) [2024-12-10T11:39:37.994Z] Copying: 859/1024 [MB] (24 MBps) [2024-12-10T11:39:38.932Z] Copying: 885/1024 [MB] (25 MBps) [2024-12-10T11:39:39.869Z] Copying: 911/1024 [MB] (26 MBps) [2024-12-10T11:39:40.806Z] Copying: 937/1024 [MB] (25 MBps) [2024-12-10T11:39:41.800Z] Copying: 963/1024 [MB] (25 MBps) [2024-12-10T11:39:42.735Z] Copying: 989/1024 [MB] (25 MBps) [2024-12-10T11:39:42.993Z] Copying: 1013/1024 [MB] (24 MBps) [2024-12-10T11:39:43.252Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-10 11:39:43.191864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.138 [2024-12-10 11:39:43.191989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:16.138 [2024-12-10 11:39:43.192019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:16.138 [2024-12-10 11:39:43.192040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.138 [2024-12-10 11:39:43.192083] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:16.138 [2024-12-10 11:39:43.200280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.138 [2024-12-10 11:39:43.200335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:16.138 [2024-12-10 11:39:43.200353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.177 ms 00:32:16.138 
[2024-12-10 11:39:43.200366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.138 [2024-12-10 11:39:43.200621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.138 [2024-12-10 11:39:43.200637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:16.138 [2024-12-10 11:39:43.200651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:32:16.138 [2024-12-10 11:39:43.200663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.138 [2024-12-10 11:39:43.204577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.138 [2024-12-10 11:39:43.204628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:16.138 [2024-12-10 11:39:43.204643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.899 ms 00:32:16.138 [2024-12-10 11:39:43.204665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.138 [2024-12-10 11:39:43.210483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.138 [2024-12-10 11:39:43.210521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:16.138 [2024-12-10 11:39:43.210549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.794 ms 00:32:16.138 [2024-12-10 11:39:43.210559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.138 [2024-12-10 11:39:43.245651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.138 [2024-12-10 11:39:43.245690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:16.138 [2024-12-10 11:39:43.245703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.084 ms 00:32:16.138 [2024-12-10 11:39:43.245713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.398 [2024-12-10 11:39:43.265976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.398 [2024-12-10 11:39:43.266018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:16.398 [2024-12-10 11:39:43.266032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.238 ms 00:32:16.398 [2024-12-10 11:39:43.266042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.398 [2024-12-10 11:39:43.268053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.398 [2024-12-10 11:39:43.268089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:16.398 [2024-12-10 11:39:43.268101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.946 ms 00:32:16.398 [2024-12-10 11:39:43.268112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.398 [2024-12-10 11:39:43.302342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.398 [2024-12-10 11:39:43.302380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:16.398 [2024-12-10 11:39:43.302408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.269 ms 00:32:16.398 [2024-12-10 11:39:43.302418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.398 [2024-12-10 11:39:43.336473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.398 [2024-12-10 11:39:43.336507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:16.398 [2024-12-10 11:39:43.336519] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.071 ms 00:32:16.398 [2024-12-10 11:39:43.336528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.398 [2024-12-10 11:39:43.370591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.398 [2024-12-10 11:39:43.370627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:16.398 [2024-12-10 11:39:43.370639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.065 ms 00:32:16.398 [2024-12-10 11:39:43.370648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.398 [2024-12-10 11:39:43.404845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.398 [2024-12-10 11:39:43.404878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:16.398 [2024-12-10 11:39:43.404890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.161 ms 00:32:16.398 [2024-12-10 11:39:43.404899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.398 [2024-12-10 11:39:43.404958] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:16.398 [2024-12-10 11:39:43.404981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:16.398 [2024-12-10 11:39:43.404997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:32:16.398 [2024-12-10 11:39:43.405008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:16.398 [2024-12-10 11:39:43.405019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405151] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405424] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 
11:39:43.405689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:32:16.399 [2024-12-10 11:39:43.405956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:16.399 [2024-12-10 11:39:43.405997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:16.400 [2024-12-10 11:39:43.406008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:16.400 [2024-12-10 11:39:43.406018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:16.400 [2024-12-10 11:39:43.406028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:16.400 [2024-12-10 11:39:43.406039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:16.400 [2024-12-10 11:39:43.406056] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:16.400 [2024-12-10 11:39:43.406066] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d171327a-a22c-46d0-a605-b88a158f0097 00:32:16.400 [2024-12-10 11:39:43.406077] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:32:16.400 [2024-12-10 11:39:43.406086] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:16.400 [2024-12-10 11:39:43.406096] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:16.400 [2024-12-10 11:39:43.406106] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:16.400 [2024-12-10 11:39:43.406126] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:16.400 [2024-12-10 11:39:43.406137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:16.400 [2024-12-10 11:39:43.406147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:16.400 [2024-12-10 11:39:43.406156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:16.400 [2024-12-10 11:39:43.406165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:16.400 [2024-12-10 11:39:43.406175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.400 [2024-12-10 11:39:43.406185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:16.400 [2024-12-10 11:39:43.406196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:32:16.400 [2024-12-10 11:39:43.406209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.400 [2024-12-10 11:39:43.425648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.400 [2024-12-10 11:39:43.425681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:16.400 [2024-12-10 11:39:43.425693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.417 ms 00:32:16.400 [2024-12-10 11:39:43.425703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.400 [2024-12-10 11:39:43.426327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:16.400 [2024-12-10 11:39:43.426351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:16.400 [2024-12-10 11:39:43.426362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:32:16.400 [2024-12-10 11:39:43.426372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.400 [2024-12-10 11:39:43.475013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.400 [2024-12-10 11:39:43.475050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:16.400 [2024-12-10 11:39:43.475078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.400 [2024-12-10 11:39:43.475088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.400 [2024-12-10 11:39:43.475136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.400 [2024-12-10 11:39:43.475154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:16.400 [2024-12-10 11:39:43.475164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.400 [2024-12-10 11:39:43.475174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.400 [2024-12-10 11:39:43.475232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.400 [2024-12-10 11:39:43.475245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:16.400 [2024-12-10 11:39:43.475255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.400 [2024-12-10 11:39:43.475265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.400 [2024-12-10 11:39:43.475280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.400 [2024-12-10 11:39:43.475290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:16.400 [2024-12-10 11:39:43.475305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.400 [2024-12-10 11:39:43.475315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.659 [2024-12-10 11:39:43.592895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.659 [2024-12-10 11:39:43.592947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:16.659 [2024-12-10 11:39:43.592961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.659 [2024-12-10 11:39:43.592972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.659 [2024-12-10 11:39:43.687653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.659 [2024-12-10 11:39:43.687708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:16.659 [2024-12-10 11:39:43.687721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.659 [2024-12-10 11:39:43.687731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.659 [2024-12-10 11:39:43.687833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.659 [2024-12-10 11:39:43.687846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:16.659 [2024-12-10 11:39:43.687857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.659 [2024-12-10 11:39:43.687867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.659 
[2024-12-10 11:39:43.687906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.659 [2024-12-10 11:39:43.687916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:16.659 [2024-12-10 11:39:43.687926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.659 [2024-12-10 11:39:43.687959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.659 [2024-12-10 11:39:43.688062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.659 [2024-12-10 11:39:43.688074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:16.659 [2024-12-10 11:39:43.688085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.659 [2024-12-10 11:39:43.688110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.659 [2024-12-10 11:39:43.688146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.659 [2024-12-10 11:39:43.688158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:16.659 [2024-12-10 11:39:43.688169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.659 [2024-12-10 11:39:43.688180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.659 [2024-12-10 11:39:43.688220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.659 [2024-12-10 11:39:43.688231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:16.659 [2024-12-10 11:39:43.688241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.659 [2024-12-10 11:39:43.688251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.659 [2024-12-10 11:39:43.688292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.659 [2024-12-10 11:39:43.688304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:16.659 [2024-12-10 11:39:43.688314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.659 [2024-12-10 11:39:43.688328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.659 [2024-12-10 11:39:43.688442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 497.384 ms, result 0 00:32:17.596 00:32:17.596 00:32:17.855 11:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:19.760 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:32:19.760 11:39:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:32:19.760 11:39:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:32:19.760 11:39:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:19.760 11:39:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:19.760 11:39:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:32:19.760 11:39:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:19.760 11:39:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:19.760 11:39:46 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81407 00:32:19.760 11:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81407 ']' 00:32:19.760 11:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81407 00:32:19.760 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81407) - No such process 00:32:19.760 Process with pid 81407 is not found 00:32:19.760 11:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81407 is not found' 00:32:19.760 11:39:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:32:20.020 11:39:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:32:20.020 Remove shared memory files 00:32:20.020 11:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:20.020 11:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:20.020 11:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:20.020 11:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:32:20.020 11:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:20.020 11:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:20.020 00:32:20.020 real 3m43.628s 00:32:20.020 user 4m13.588s 00:32:20.020 sys 0m39.866s 00:32:20.020 11:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:20.020 11:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:20.020 ************************************ 00:32:20.020 END TEST ftl_dirty_shutdown 00:32:20.020 ************************************ 00:32:20.020 11:39:46 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:20.020 11:39:46 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:20.020 11:39:46 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:20.020 11:39:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:20.020 ************************************ 00:32:20.020 START TEST ftl_upgrade_shutdown 00:32:20.020 ************************************ 00:32:20.020 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:20.280 * Looking for test storage... 
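The dirty-shutdown test that just finished closes with a verify-and-teardown pattern worth restating: a digest recorded before the unclean shutdown must be reproduced after recovery, and the process is probed with signal 0 before any kill is attempted (here the pid was already gone, hence "No such process"). A minimal sketch of that flow, where $testfile and $pid are placeholders for the test's own bookkeeping in autotest_common.sh:

    md5sum "$testfile" > "$testfile.md5"          # record the digest before the unclean shutdown
    # ... dirty FTL shutdown and recovery happen in between ...
    md5sum -c "$testfile.md5" || exit 1           # recovery must reproduce the data bit-for-bit
    kill -0 "$pid" 2>/dev/null && kill "$pid"     # signal 0 only tests existence; skip the kill if already gone
    rm -f "$testfile" "$testfile.md5"             # scratch files and shared memory cleanup close the test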
00:32:20.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:20.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.280 --rc genhtml_branch_coverage=1 00:32:20.280 --rc genhtml_function_coverage=1 00:32:20.280 --rc genhtml_legend=1 00:32:20.280 --rc geninfo_all_blocks=1 00:32:20.280 --rc geninfo_unexecuted_blocks=1 00:32:20.280 00:32:20.280 ' 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:20.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.280 --rc genhtml_branch_coverage=1 00:32:20.280 --rc genhtml_function_coverage=1 00:32:20.280 --rc genhtml_legend=1 00:32:20.280 --rc geninfo_all_blocks=1 00:32:20.280 --rc geninfo_unexecuted_blocks=1 00:32:20.280 00:32:20.280 ' 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:20.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.280 --rc genhtml_branch_coverage=1 00:32:20.280 --rc genhtml_function_coverage=1 00:32:20.280 --rc genhtml_legend=1 00:32:20.280 --rc geninfo_all_blocks=1 00:32:20.280 --rc geninfo_unexecuted_blocks=1 00:32:20.280 00:32:20.280 ' 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:20.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.280 --rc genhtml_branch_coverage=1 00:32:20.280 --rc genhtml_function_coverage=1 00:32:20.280 --rc genhtml_legend=1 00:32:20.280 --rc geninfo_all_blocks=1 00:32:20.280 --rc geninfo_unexecuted_blocks=1 00:32:20.280 00:32:20.280 ' 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:32:20.280 11:39:47 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:20.280 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:20.281 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:20.281 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83781 00:32:20.281 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:32:20.281 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:20.281 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83781 00:32:20.281 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83781 ']' 00:32:20.281 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.281 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:20.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.281 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.281 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:20.281 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:20.281 [2024-12-10 11:39:47.390121] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
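waitforlisten above blocks until the freshly forked spdk_tgt answers on its RPC socket. A condensed sketch of that readiness loop (a paraphrase of the idea, not the literal autotest_common.sh body; rpc_get_methods is a standard SPDK RPC, and the pid and socket path are the ones in this run):

    pid=83781 rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || exit 1                                # bail out if the target died during startup
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break  # socket is up and answering: ready
        sleep 0.5
    done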
00:32:20.281 [2024-12-10 11:39:47.390242] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83781 ] 00:32:20.540 [2024-12-10 11:39:47.570999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.800 [2024-12-10 11:39:47.673508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.368 11:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:21.368 11:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:21.368 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:21.368 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:32:21.368 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:32:21.368 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:21.368 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:32:21.368 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:21.368 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:32:21.368 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:21.368 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:32:21.368 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:21.368 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:32:21.368 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:21.368 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:32:21.368 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:21.368 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:32:21.627 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:32:21.627 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:32:21.627 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:21.627 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:32:21.627 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:32:21.627 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:32:21.886 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:32:21.886 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:32:21.886 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:32:21.886 11:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:32:21.886 11:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:21.886 11:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:21.886 11:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:32:21.886 11:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:32:22.145 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:22.145 { 00:32:22.145 "name": "basen1", 00:32:22.145 "aliases": [ 00:32:22.145 "8fb995e1-ccfd-46e1-8b3a-be6b9a785cae" 00:32:22.145 ], 00:32:22.145 "product_name": "NVMe disk", 00:32:22.145 "block_size": 4096, 00:32:22.145 "num_blocks": 1310720, 00:32:22.145 "uuid": "8fb995e1-ccfd-46e1-8b3a-be6b9a785cae", 00:32:22.145 "numa_id": -1, 00:32:22.145 "assigned_rate_limits": { 00:32:22.145 "rw_ios_per_sec": 0, 00:32:22.145 "rw_mbytes_per_sec": 0, 00:32:22.145 "r_mbytes_per_sec": 0, 00:32:22.145 "w_mbytes_per_sec": 0 00:32:22.145 }, 00:32:22.145 "claimed": true, 00:32:22.145 "claim_type": "read_many_write_one", 00:32:22.145 "zoned": false, 00:32:22.145 "supported_io_types": { 00:32:22.145 "read": true, 00:32:22.145 "write": true, 00:32:22.145 "unmap": true, 00:32:22.145 "flush": true, 00:32:22.145 "reset": true, 00:32:22.145 "nvme_admin": true, 00:32:22.145 "nvme_io": true, 00:32:22.145 "nvme_io_md": false, 00:32:22.145 "write_zeroes": true, 00:32:22.145 "zcopy": false, 00:32:22.145 "get_zone_info": false, 00:32:22.145 "zone_management": false, 00:32:22.145 "zone_append": false, 00:32:22.145 "compare": true, 00:32:22.145 "compare_and_write": false, 00:32:22.145 "abort": true, 00:32:22.145 "seek_hole": false, 00:32:22.145 "seek_data": false, 00:32:22.145 "copy": true, 00:32:22.145 "nvme_iov_md": false 00:32:22.145 }, 00:32:22.145 "driver_specific": { 00:32:22.145 "nvme": [ 00:32:22.145 { 00:32:22.145 "pci_address": "0000:00:11.0", 00:32:22.145 "trid": { 00:32:22.145 "trtype": "PCIe", 00:32:22.145 "traddr": "0000:00:11.0" 00:32:22.145 }, 00:32:22.145 "ctrlr_data": { 00:32:22.145 "cntlid": 0, 00:32:22.145 "vendor_id": "0x1b36", 00:32:22.145 "model_number": "QEMU NVMe Ctrl", 00:32:22.145 "serial_number": "12341", 00:32:22.145 "firmware_revision": "8.0.0", 00:32:22.145 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:22.145 "oacs": { 00:32:22.145 "security": 0, 00:32:22.145 "format": 1, 00:32:22.145 "firmware": 0, 00:32:22.145 "ns_manage": 1 00:32:22.145 }, 00:32:22.145 "multi_ctrlr": false, 00:32:22.145 "ana_reporting": false 00:32:22.145 }, 00:32:22.145 "vs": { 00:32:22.145 "nvme_version": "1.4" 00:32:22.145 }, 00:32:22.145 "ns_data": { 00:32:22.145 "id": 1, 00:32:22.145 "can_share": false 00:32:22.145 } 00:32:22.145 } 00:32:22.145 ], 00:32:22.145 "mp_policy": "active_passive" 00:32:22.145 } 00:32:22.145 } 00:32:22.145 ]' 00:32:22.145 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:22.145 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:22.145 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:22.145 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:22.145 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:22.145 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:32:22.145 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:32:22.145 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:32:22.145 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:32:22.145 11:39:49 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:22.145 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:22.405 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=4f4b279e-b1f5-4354-b1ba-30fe383e8b5c 00:32:22.405 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:32:22.405 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4f4b279e-b1f5-4354-b1ba-30fe383e8b5c 00:32:22.405 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:32:22.664 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=1416ac0e-e00e-4495-8a49-64c237232957 00:32:22.664 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 1416ac0e-e00e-4495-8a49-64c237232957 00:32:22.923 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=02a685c7-5470-4982-992e-017aff35b031 00:32:22.923 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 02a685c7-5470-4982-992e-017aff35b031 ]] 00:32:22.923 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 02a685c7-5470-4982-992e-017aff35b031 5120 00:32:22.923 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:32:22.923 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:22.923 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=02a685c7-5470-4982-992e-017aff35b031 00:32:22.923 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:32:22.923 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 02a685c7-5470-4982-992e-017aff35b031 00:32:22.923 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=02a685c7-5470-4982-992e-017aff35b031 00:32:22.923 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:22.923 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:22.923 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:22.923 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 02a685c7-5470-4982-992e-017aff35b031 00:32:23.182 11:39:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:23.182 { 00:32:23.182 "name": "02a685c7-5470-4982-992e-017aff35b031", 00:32:23.182 "aliases": [ 00:32:23.182 "lvs/basen1p0" 00:32:23.182 ], 00:32:23.182 "product_name": "Logical Volume", 00:32:23.182 "block_size": 4096, 00:32:23.182 "num_blocks": 5242880, 00:32:23.182 "uuid": "02a685c7-5470-4982-992e-017aff35b031", 00:32:23.182 "assigned_rate_limits": { 00:32:23.182 "rw_ios_per_sec": 0, 00:32:23.182 "rw_mbytes_per_sec": 0, 00:32:23.182 "r_mbytes_per_sec": 0, 00:32:23.182 "w_mbytes_per_sec": 0 00:32:23.182 }, 00:32:23.182 "claimed": false, 00:32:23.182 "zoned": false, 00:32:23.182 "supported_io_types": { 00:32:23.182 "read": true, 00:32:23.182 "write": true, 00:32:23.182 "unmap": true, 00:32:23.182 "flush": false, 00:32:23.182 "reset": true, 00:32:23.182 "nvme_admin": false, 00:32:23.182 "nvme_io": false, 00:32:23.182 "nvme_io_md": false, 00:32:23.182 "write_zeroes": 
true, 00:32:23.182 "zcopy": false, 00:32:23.182 "get_zone_info": false, 00:32:23.182 "zone_management": false, 00:32:23.182 "zone_append": false, 00:32:23.182 "compare": false, 00:32:23.182 "compare_and_write": false, 00:32:23.182 "abort": false, 00:32:23.182 "seek_hole": true, 00:32:23.182 "seek_data": true, 00:32:23.182 "copy": false, 00:32:23.182 "nvme_iov_md": false 00:32:23.182 }, 00:32:23.182 "driver_specific": { 00:32:23.182 "lvol": { 00:32:23.182 "lvol_store_uuid": "1416ac0e-e00e-4495-8a49-64c237232957", 00:32:23.182 "base_bdev": "basen1", 00:32:23.182 "thin_provision": true, 00:32:23.182 "num_allocated_clusters": 0, 00:32:23.182 "snapshot": false, 00:32:23.182 "clone": false, 00:32:23.182 "esnap_clone": false 00:32:23.182 } 00:32:23.182 } 00:32:23.182 } 00:32:23.182 ]' 00:32:23.182 11:39:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:23.182 11:39:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:23.182 11:39:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:23.182 11:39:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:32:23.182 11:39:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:32:23.182 11:39:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:32:23.182 11:39:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:32:23.182 11:39:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:32:23.182 11:39:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:32:23.442 11:39:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:32:23.442 11:39:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:32:23.442 11:39:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:32:23.701 11:39:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:32:23.701 11:39:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:32:23.701 11:39:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 02a685c7-5470-4982-992e-017aff35b031 -c cachen1p0 --l2p_dram_limit 2 00:32:23.961 [2024-12-10 11:39:50.832335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.961 [2024-12-10 11:39:50.832385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:23.961 [2024-12-10 11:39:50.832419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:23.961 [2024-12-10 11:39:50.832430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.961 [2024-12-10 11:39:50.832495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.961 [2024-12-10 11:39:50.832507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:23.961 [2024-12-10 11:39:50.832520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:32:23.961 [2024-12-10 11:39:50.832530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.961 [2024-12-10 11:39:50.832553] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:23.961 [2024-12-10 
11:39:50.833561] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:23.961 [2024-12-10 11:39:50.833602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.961 [2024-12-10 11:39:50.833614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:23.961 [2024-12-10 11:39:50.833630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.051 ms 00:32:23.961 [2024-12-10 11:39:50.833640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.961 [2024-12-10 11:39:50.833722] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 111c6f31-3cf4-4c72-912f-760942f15cd4 00:32:23.961 [2024-12-10 11:39:50.835192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.961 [2024-12-10 11:39:50.835230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:32:23.961 [2024-12-10 11:39:50.835243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:32:23.961 [2024-12-10 11:39:50.835256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.961 [2024-12-10 11:39:50.842982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.961 [2024-12-10 11:39:50.843019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:23.961 [2024-12-10 11:39:50.843047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.697 ms 00:32:23.961 [2024-12-10 11:39:50.843059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.961 [2024-12-10 11:39:50.843107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.961 [2024-12-10 11:39:50.843123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:23.961 [2024-12-10 11:39:50.843134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:32:23.961 [2024-12-10 11:39:50.843149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.961 [2024-12-10 11:39:50.843218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.961 [2024-12-10 11:39:50.843234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:23.961 [2024-12-10 11:39:50.843248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:23.961 [2024-12-10 11:39:50.843260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.961 [2024-12-10 11:39:50.843284] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:23.961 [2024-12-10 11:39:50.848289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.961 [2024-12-10 11:39:50.848323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:23.961 [2024-12-10 11:39:50.848339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.017 ms 00:32:23.961 [2024-12-10 11:39:50.848366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.961 [2024-12-10 11:39:50.848400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.961 [2024-12-10 11:39:50.848412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:23.961 [2024-12-10 11:39:50.848425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:23.961 [2024-12-10 11:39:50.848436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:23.961 [2024-12-10 11:39:50.848488] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:32:23.961 [2024-12-10 11:39:50.848620] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:23.961 [2024-12-10 11:39:50.848640] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:23.961 [2024-12-10 11:39:50.848654] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:23.961 [2024-12-10 11:39:50.848670] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:23.961 [2024-12-10 11:39:50.848682] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:23.961 [2024-12-10 11:39:50.848696] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:23.961 [2024-12-10 11:39:50.848706] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:23.961 [2024-12-10 11:39:50.848722] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:23.961 [2024-12-10 11:39:50.848732] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:23.961 [2024-12-10 11:39:50.848745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.961 [2024-12-10 11:39:50.848755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:23.961 [2024-12-10 11:39:50.848768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.259 ms 00:32:23.961 [2024-12-10 11:39:50.848778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.961 [2024-12-10 11:39:50.848855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.961 [2024-12-10 11:39:50.848877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:23.961 [2024-12-10 11:39:50.848890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:32:23.961 [2024-12-10 11:39:50.848901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.961 [2024-12-10 11:39:50.849002] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:23.961 [2024-12-10 11:39:50.849015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:23.961 [2024-12-10 11:39:50.849027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:23.961 [2024-12-10 11:39:50.849037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.961 [2024-12-10 11:39:50.849051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:23.961 [2024-12-10 11:39:50.849060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:23.961 [2024-12-10 11:39:50.849072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:23.961 [2024-12-10 11:39:50.849081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:23.961 [2024-12-10 11:39:50.849093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:23.961 [2024-12-10 11:39:50.849102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.961 [2024-12-10 11:39:50.849113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:23.961 [2024-12-10 11:39:50.849123] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:32:23.961 [2024-12-10 11:39:50.849136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.961 [2024-12-10 11:39:50.849145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:23.961 [2024-12-10 11:39:50.849157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:23.961 [2024-12-10 11:39:50.849166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.961 [2024-12-10 11:39:50.849180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:23.961 [2024-12-10 11:39:50.849189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:23.961 [2024-12-10 11:39:50.849201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.961 [2024-12-10 11:39:50.849210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:23.961 [2024-12-10 11:39:50.849221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:23.961 [2024-12-10 11:39:50.849230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:23.961 [2024-12-10 11:39:50.849242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:23.961 [2024-12-10 11:39:50.849251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:23.961 [2024-12-10 11:39:50.849262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:23.961 [2024-12-10 11:39:50.849272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:23.961 [2024-12-10 11:39:50.849284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:23.961 [2024-12-10 11:39:50.849292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:23.961 [2024-12-10 11:39:50.849304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:23.961 [2024-12-10 11:39:50.849313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:23.961 [2024-12-10 11:39:50.849324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:23.961 [2024-12-10 11:39:50.849333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:23.961 [2024-12-10 11:39:50.849347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:23.961 [2024-12-10 11:39:50.849356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.961 [2024-12-10 11:39:50.849368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:23.962 [2024-12-10 11:39:50.849377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:23.962 [2024-12-10 11:39:50.849388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.962 [2024-12-10 11:39:50.849397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:23.962 [2024-12-10 11:39:50.849410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:23.962 [2024-12-10 11:39:50.849419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.962 [2024-12-10 11:39:50.849430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:23.962 [2024-12-10 11:39:50.849439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:23.962 [2024-12-10 11:39:50.849451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.962 [2024-12-10 11:39:50.849459] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:32:23.962 [2024-12-10 11:39:50.849472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:23.962 [2024-12-10 11:39:50.849482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:23.962 [2024-12-10 11:39:50.849494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.962 [2024-12-10 11:39:50.849504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:23.962 [2024-12-10 11:39:50.849518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:23.962 [2024-12-10 11:39:50.849527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:23.962 [2024-12-10 11:39:50.849539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:23.962 [2024-12-10 11:39:50.849558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:23.962 [2024-12-10 11:39:50.849570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:23.962 [2024-12-10 11:39:50.849581] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:23.962 [2024-12-10 11:39:50.849600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:23.962 [2024-12-10 11:39:50.849611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:23.962 [2024-12-10 11:39:50.849624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:23.962 [2024-12-10 11:39:50.849635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:23.962 [2024-12-10 11:39:50.849648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:23.962 [2024-12-10 11:39:50.849659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:23.962 [2024-12-10 11:39:50.849672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:23.962 [2024-12-10 11:39:50.849682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:23.962 [2024-12-10 11:39:50.849695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:23.962 [2024-12-10 11:39:50.849706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:23.962 [2024-12-10 11:39:50.849722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:23.962 [2024-12-10 11:39:50.849732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:23.962 [2024-12-10 11:39:50.849745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:23.962 [2024-12-10 11:39:50.849755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:23.962 [2024-12-10 11:39:50.849768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:23.962 [2024-12-10 11:39:50.849778] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:23.962 [2024-12-10 11:39:50.849791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:23.962 [2024-12-10 11:39:50.849802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:23.962 [2024-12-10 11:39:50.849815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:23.962 [2024-12-10 11:39:50.849825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:23.962 [2024-12-10 11:39:50.849838] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:23.962 [2024-12-10 11:39:50.849848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.962 [2024-12-10 11:39:50.849860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:23.962 [2024-12-10 11:39:50.849871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.905 ms 00:32:23.962 [2024-12-10 11:39:50.849883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.962 [2024-12-10 11:39:50.849931] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
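Condensed, the RPC sequence that produced the startup trace above is (taken verbatim from the xtrace, with $lvs_uuid and $lvol_uuid standing in for 1416ac0e-... and 02a685c7-... from this run; the final create triggers the NV cache scrub whose two notices bracket this recap):

    rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # base NVMe -> namespace bdev basen1
    rpc.py bdev_lvol_create_lvstore basen1 lvs                            # lvstore over the base namespace
    rpc.py bdev_lvol_create basen1p0 20480 -t -u "$lvs_uuid"              # 20 GiB thin-provisioned lvol
    rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # cache NVMe -> cachen1
    rpc.py bdev_split_create cachen1 -s 5120 1                            # one 5 GiB split, cachen1p0
    rpc.py -t 60 bdev_ftl_create -b ftl -d "$lvol_uuid" -c cachen1p0 --l2p_dram_limit 2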
00:32:23.962 [2024-12-10 11:39:50.849957] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:28.156 [2024-12-10 11:39:54.449311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.449384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:28.156 [2024-12-10 11:39:54.449400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3605.222 ms 00:32:28.156 [2024-12-10 11:39:54.449429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.486760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.486835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:28.156 [2024-12-10 11:39:54.486851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.972 ms 00:32:28.156 [2024-12-10 11:39:54.486864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.486950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.486967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:28.156 [2024-12-10 11:39:54.486979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:32:28.156 [2024-12-10 11:39:54.486999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.532173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.532219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:28.156 [2024-12-10 11:39:54.532232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.188 ms 00:32:28.156 [2024-12-10 11:39:54.532246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.532296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.532312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:28.156 [2024-12-10 11:39:54.532323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:28.156 [2024-12-10 11:39:54.532335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.532814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.532840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:28.156 [2024-12-10 11:39:54.532860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.430 ms 00:32:28.156 [2024-12-10 11:39:54.532874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.532913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.532938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:28.156 [2024-12-10 11:39:54.532952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:32:28.156 [2024-12-10 11:39:54.532976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.551286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.551331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:28.156 [2024-12-10 11:39:54.551344] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.320 ms 00:32:28.156 [2024-12-10 11:39:54.551356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.588967] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:28.156 [2024-12-10 11:39:54.590291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.590334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:28.156 [2024-12-10 11:39:54.590358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.900 ms 00:32:28.156 [2024-12-10 11:39:54.590373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.624635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.624675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:32:28.156 [2024-12-10 11:39:54.624691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.268 ms 00:32:28.156 [2024-12-10 11:39:54.624702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.624809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.624825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:28.156 [2024-12-10 11:39:54.624842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:32:28.156 [2024-12-10 11:39:54.624852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.658708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.658743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:32:28.156 [2024-12-10 11:39:54.658760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.855 ms 00:32:28.156 [2024-12-10 11:39:54.658770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.692748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.692783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:32:28.156 [2024-12-10 11:39:54.692798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.968 ms 00:32:28.156 [2024-12-10 11:39:54.692808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.693618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.693648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:28.156 [2024-12-10 11:39:54.693662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.755 ms 00:32:28.156 [2024-12-10 11:39:54.693675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.790325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.790365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:32:28.156 [2024-12-10 11:39:54.790385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 96.731 ms 00:32:28.156 [2024-12-10 11:39:54.790395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.825538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:28.156 [2024-12-10 11:39:54.825584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:32:28.156 [2024-12-10 11:39:54.825600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.102 ms 00:32:28.156 [2024-12-10 11:39:54.825610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.859784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.859822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:32:28.156 [2024-12-10 11:39:54.859838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.169 ms 00:32:28.156 [2024-12-10 11:39:54.859847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.894969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.895011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:28.156 [2024-12-10 11:39:54.895027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.121 ms 00:32:28.156 [2024-12-10 11:39:54.895037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.895084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.895097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:28.156 [2024-12-10 11:39:54.895113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:28.156 [2024-12-10 11:39:54.895123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.895219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.156 [2024-12-10 11:39:54.895235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:28.156 [2024-12-10 11:39:54.895248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:32:28.156 [2024-12-10 11:39:54.895257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.156 [2024-12-10 11:39:54.896237] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4070.084 ms, result 0 00:32:28.156 { 00:32:28.156 "name": "ftl", 00:32:28.156 "uuid": "111c6f31-3cf4-4c72-912f-760942f15cd4" 00:32:28.156 } 00:32:28.156 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:32:28.156 [2024-12-10 11:39:55.115217] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.156 11:39:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:32:28.415 11:39:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:32:28.415 [2024-12-10 11:39:55.523161] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:28.674 11:39:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:32:28.674 [2024-12-10 11:39:55.728613] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:28.674 11:39:55 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:29.243 Fill FTL, iteration 1 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83909 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83909 /var/tmp/spdk.tgt.sock 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83909 ']' 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.243 11:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:29.243 [2024-12-10 11:39:56.219628] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
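tcp_initiator_setup (ftl/common.sh@151-165 above) starts a second spdk_tgt pinned to core 1 with a private RPC socket, so the initiator side can be driven independently of the main target. Condensed to its essentials, using only the paths and flags visible in the trace:

# Initiator-side SPDK target on its own RPC socket (sketch of tcp_initiator_setup).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock &
spdk_ini_pid=$!
waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock   # block until RPCs are accepted
# From here on, -s routes RPCs to the initiator instance:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock rpc_get_methods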
00:32:29.243 [2024-12-10 11:39:56.220189] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83909 ] 00:32:29.502 [2024-12-10 11:39:56.396598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.502 [2024-12-10 11:39:56.506003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.445 11:39:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:30.445 11:39:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:30.445 11:39:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:32:30.705 ftln1 00:32:30.705 11:39:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:32:30.705 11:39:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:32:30.964 11:39:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:32:30.964 11:39:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83909 00:32:30.965 11:39:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83909 ']' 00:32:30.965 11:39:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83909 00:32:30.965 11:39:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:30.965 11:39:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:30.965 11:39:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83909 00:32:30.965 11:39:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:30.965 11:39:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:30.965 killing process with pid 83909 00:32:30.965 11:39:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83909' 00:32:30.965 11:39:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83909 00:32:30.965 11:39:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83909 00:32:33.502 11:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:32:33.502 11:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:33.502 [2024-12-10 11:40:00.424744] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
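The attach-and-fill above reduces to three commands: connect the initiator to the NVMe/TCP subsystem (which surfaces bdev ftln1), snapshot the bdev subsystem config so spdk_dd can recreate the attachment on its own, and write 1 GiB of random data at queue depth 2. A sketch with the rpc.py and spdk_dd paths shortened:

# 1) Attach; controller 'ftl' exposes its namespace as bdev ftln1.
rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
# 2) Wrap save_subsystem_config output in a full config document (common.sh@171-173).
{ echo '{"subsystems": ['
  rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
  echo ']}'; } > test/ftl/config/ini.json
# 3) Fill iteration 1: 1024 x 1 MiB blocks of urandom at qd=2, starting at block 0.
spdk_dd --json=test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 \
    --bs=1048576 --count=1024 --qd=2 --seek=0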
00:32:33.502 [2024-12-10 11:40:00.424880] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83967 ] 00:32:33.502 [2024-12-10 11:40:00.601513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.762 [2024-12-10 11:40:00.726835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.141  [2024-12-10T11:40:03.633Z] Copying: 241/1024 [MB] (241 MBps) [2024-12-10T11:40:04.602Z] Copying: 485/1024 [MB] (244 MBps) [2024-12-10T11:40:05.539Z] Copying: 731/1024 [MB] (246 MBps) [2024-12-10T11:40:05.539Z] Copying: 976/1024 [MB] (245 MBps) [2024-12-10T11:40:06.917Z] Copying: 1024/1024 [MB] (average 243 MBps) 00:32:39.803 00:32:39.803 11:40:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:32:39.803 Calculate MD5 checksum, iteration 1 00:32:39.804 11:40:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:32:39.804 11:40:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:39.804 11:40:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:39.804 11:40:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:39.804 11:40:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:39.804 11:40:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:39.804 11:40:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:39.804 [2024-12-10 11:40:06.643727] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
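The checksum pass reverses the transfer direction: spdk_dd reads the just-written window back out of ftln1 into a plain file, and md5sum fingerprints it for comparison after the shutdown/upgrade cycle. In outline:

# Read iteration 1's window (blocks 0..1023) from the FTL bdev into a file...
spdk_dd --json=test/ftl/config/ini.json --ib=ftln1 \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
    --bs=1048576 --count=1024 --qd=2 --skip=0
# ...then keep only the digest field of md5sum's output.
sums[0]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')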
00:32:39.804 [2024-12-10 11:40:06.643852] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84033 ] 00:32:39.804 [2024-12-10 11:40:06.829071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.063 [2024-12-10 11:40:06.929670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:41.441  [2024-12-10T11:40:09.123Z] Copying: 613/1024 [MB] (613 MBps) [2024-12-10T11:40:10.060Z] Copying: 1024/1024 [MB] (average 608 MBps) 00:32:42.946 00:32:42.946 11:40:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:32:42.946 11:40:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:44.851 11:40:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:44.851 Fill FTL, iteration 2 00:32:44.851 11:40:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=781888b064a719c07bd4599fbe150746 00:32:44.851 11:40:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:44.851 11:40:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:44.851 11:40:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:32:44.851 11:40:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:44.851 11:40:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:44.851 11:40:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:44.851 11:40:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:44.851 11:40:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:44.851 11:40:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:44.851 [2024-12-10 11:40:11.726555] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
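Each iteration advances seek (writes) and skip (reads) by count blocks, which is why iteration 2 above runs with --seek=1024: with bs=1048576 and count=1024, iteration 1 covers the first 1 GiB of ftln1 and iteration 2 the second. The loop in upgrade_shutdown.sh is roughly the following, with $testfile standing in for test/ftl/file:

# Approximate shape of the fill/checksum loop (iterations=2, bs=1 MiB, count=1024, qd=2).
seek=0; skip=0
for ((i = 0; i < iterations; i++)); do
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    seek=$((seek + count))    # next 1 GiB write window
    tcp_dd --ib=ftln1 --of="$testfile" --bs=$bs --count=$count --qd=$qd --skip=$skip
    skip=$((skip + count))    # next 1 GiB read window
    sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')
done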
00:32:44.851 [2024-12-10 11:40:11.726895] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84092 ] 00:32:44.851 [2024-12-10 11:40:11.909289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.111 [2024-12-10 11:40:12.017918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.489  [2024-12-10T11:40:14.542Z] Copying: 241/1024 [MB] (241 MBps) [2024-12-10T11:40:15.479Z] Copying: 481/1024 [MB] (240 MBps) [2024-12-10T11:40:16.856Z] Copying: 726/1024 [MB] (245 MBps) [2024-12-10T11:40:16.856Z] Copying: 971/1024 [MB] (245 MBps) [2024-12-10T11:40:17.794Z] Copying: 1024/1024 [MB] (average 242 MBps) 00:32:50.680 00:32:50.940 Calculate MD5 checksum, iteration 2 00:32:50.940 11:40:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:32:50.940 11:40:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:32:50.940 11:40:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:50.940 11:40:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:50.940 11:40:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:50.940 11:40:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:50.940 11:40:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:50.940 11:40:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:50.940 [2024-12-10 11:40:17.883601] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
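The digests captured after each read-back (781888b0... above for window 1, and a second one recorded just below for window 2) are the pre-shutdown reference values; re-reading the same windows after FTL comes back and comparing against sums[] is what demonstrates the data survived. A hypothetical verification pass along those lines:

# Hypothetical post-restart check: each window's MD5 must match its stored reference.
for ((i = 0; i < iterations; i++)); do
    tcp_dd --ib=ftln1 --of="$testfile" --bs=$bs --count=$count --qd=$qd --skip=$((i * count))
    [[ $(md5sum "$testfile" | cut -f1 -d' ') == "${sums[i]}" ]] ||
        { echo "MD5 mismatch in window $i"; exit 1; }
done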
00:32:50.940 [2024-12-10 11:40:17.883735] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84157 ] 00:32:51.199 [2024-12-10 11:40:18.064134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.199 [2024-12-10 11:40:18.168349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.114  [2024-12-10T11:40:20.795Z] Copying: 613/1024 [MB] (613 MBps) [2024-12-10T11:40:21.732Z] Copying: 1024/1024 [MB] (average 602 MBps) 00:32:54.618 00:32:54.618 11:40:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:32:54.618 11:40:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:56.525 11:40:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:56.525 11:40:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=6780fdae5a80b0b6a2d4c5a8281521a2 00:32:56.526 11:40:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:56.526 11:40:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:56.526 11:40:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:56.526 [2024-12-10 11:40:23.537205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:56.526 [2024-12-10 11:40:23.537263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:56.526 [2024-12-10 11:40:23.537282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:32:56.526 [2024-12-10 11:40:23.537294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:56.526 [2024-12-10 11:40:23.537321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:56.526 [2024-12-10 11:40:23.537337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:56.526 [2024-12-10 11:40:23.537350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:56.526 [2024-12-10 11:40:23.537361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:56.526 [2024-12-10 11:40:23.537383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:56.526 [2024-12-10 11:40:23.537397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:56.526 [2024-12-10 11:40:23.537415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:56.526 [2024-12-10 11:40:23.537426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:56.526 [2024-12-10 11:40:23.537500] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.282 ms, result 0 00:32:56.526 true 00:32:56.526 11:40:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:56.785 { 00:32:56.785 "name": "ftl", 00:32:56.785 "properties": [ 00:32:56.785 { 00:32:56.785 "name": "superblock_version", 00:32:56.785 "value": 5, 00:32:56.785 "read-only": true 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "name": "base_device", 00:32:56.785 "bands": [ 00:32:56.785 { 00:32:56.785 "id": 0, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 
00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 1, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 2, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 3, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 4, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 5, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 6, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 7, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 8, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 9, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 10, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 11, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 12, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 13, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 14, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 15, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 16, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 17, 00:32:56.785 "state": "FREE", 00:32:56.785 "validity": 0.0 00:32:56.785 } 00:32:56.785 ], 00:32:56.785 "read-only": true 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "name": "cache_device", 00:32:56.785 "type": "bdev", 00:32:56.785 "chunks": [ 00:32:56.785 { 00:32:56.785 "id": 0, 00:32:56.785 "state": "INACTIVE", 00:32:56.785 "utilization": 0.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 1, 00:32:56.785 "state": "CLOSED", 00:32:56.785 "utilization": 1.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 2, 00:32:56.785 "state": "CLOSED", 00:32:56.785 "utilization": 1.0 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 3, 00:32:56.785 "state": "OPEN", 00:32:56.785 "utilization": 0.001953125 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "id": 4, 00:32:56.785 "state": "OPEN", 00:32:56.785 "utilization": 0.0 00:32:56.785 } 00:32:56.785 ], 00:32:56.785 "read-only": true 00:32:56.785 }, 00:32:56.785 { 00:32:56.785 "name": "verbose_mode", 00:32:56.785 "value": true, 00:32:56.785 "unit": "", 00:32:56.786 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:56.786 }, 00:32:56.786 { 00:32:56.786 "name": "prep_upgrade_on_shutdown", 00:32:56.786 "value": false, 00:32:56.786 "unit": "", 00:32:56.786 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:56.786 } 00:32:56.786 ] 00:32:56.786 } 00:32:56.786 11:40:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:32:57.045 [2024-12-10 11:40:23.925112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
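bdev_ftl_get_properties returns the JSON document printed above, so individual properties are easy to assert on with jq; the test does exactly that a few lines below, counting cache chunks whose utilization is non-zero. A hypothetical check of the flag being set here, once the RPC completes:

# Hypothetical: verify prep_upgrade_on_shutdown reads back as true.
rpc.py bdev_ftl_get_properties -b ftl |
  jq -e '.properties[] | select(.name == "prep_upgrade_on_shutdown") | .value == true'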
00:32:57.045 [2024-12-10 11:40:23.925164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:57.045 [2024-12-10 11:40:23.925179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:57.045 [2024-12-10 11:40:23.925189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.045 [2024-12-10 11:40:23.925212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.045 [2024-12-10 11:40:23.925223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:57.045 [2024-12-10 11:40:23.925234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:57.045 [2024-12-10 11:40:23.925243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.045 [2024-12-10 11:40:23.925262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.045 [2024-12-10 11:40:23.925272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:57.045 [2024-12-10 11:40:23.925281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:57.045 [2024-12-10 11:40:23.925291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.045 [2024-12-10 11:40:23.925344] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.229 ms, result 0 00:32:57.045 true 00:32:57.045 11:40:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:32:57.045 11:40:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:57.045 11:40:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:57.045 11:40:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:32:57.045 11:40:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:32:57.045 11:40:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:57.304 [2024-12-10 11:40:24.280862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.304 [2024-12-10 11:40:24.280901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:57.304 [2024-12-10 11:40:24.280926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:57.304 [2024-12-10 11:40:24.280937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.304 [2024-12-10 11:40:24.280961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.304 [2024-12-10 11:40:24.280972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:57.304 [2024-12-10 11:40:24.280983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:57.304 [2024-12-10 11:40:24.280992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.304 [2024-12-10 11:40:24.281011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.304 [2024-12-10 11:40:24.281021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:57.304 [2024-12-10 11:40:24.281031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:57.304 [2024-12-10 11:40:24.281041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:57.304 [2024-12-10 11:40:24.281092] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.217 ms, result 0 00:32:57.304 true 00:32:57.304 11:40:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:57.563 { 00:32:57.563 "name": "ftl", 00:32:57.563 "properties": [ 00:32:57.563 { 00:32:57.563 "name": "superblock_version", 00:32:57.563 "value": 5, 00:32:57.563 "read-only": true 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "name": "base_device", 00:32:57.563 "bands": [ 00:32:57.563 { 00:32:57.563 "id": 0, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 1, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 2, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 3, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 4, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 5, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 6, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 7, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 8, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 9, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 10, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 11, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 12, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 13, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 14, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 15, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 16, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 17, 00:32:57.563 "state": "FREE", 00:32:57.563 "validity": 0.0 00:32:57.563 } 00:32:57.563 ], 00:32:57.563 "read-only": true 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "name": "cache_device", 00:32:57.563 "type": "bdev", 00:32:57.563 "chunks": [ 00:32:57.563 { 00:32:57.563 "id": 0, 00:32:57.563 "state": "INACTIVE", 00:32:57.563 "utilization": 0.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 1, 00:32:57.563 "state": "CLOSED", 00:32:57.563 "utilization": 1.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 2, 00:32:57.563 "state": "CLOSED", 00:32:57.563 "utilization": 1.0 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 3, 00:32:57.563 "state": "OPEN", 00:32:57.563 "utilization": 0.001953125 00:32:57.563 }, 00:32:57.563 { 00:32:57.563 "id": 4, 00:32:57.563 "state": "OPEN", 00:32:57.563 "utilization": 0.0 00:32:57.563 } 00:32:57.563 ], 00:32:57.563 "read-only": true 00:32:57.564 }, 00:32:57.564 { 00:32:57.564 "name": "verbose_mode", 
00:32:57.564 "value": true, 00:32:57.564 "unit": "", 00:32:57.564 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:57.564 }, 00:32:57.564 { 00:32:57.564 "name": "prep_upgrade_on_shutdown", 00:32:57.564 "value": true, 00:32:57.564 "unit": "", 00:32:57.564 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:57.564 } 00:32:57.564 ] 00:32:57.564 } 00:32:57.564 11:40:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:32:57.564 11:40:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83781 ]] 00:32:57.564 11:40:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83781 00:32:57.564 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83781 ']' 00:32:57.564 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83781 00:32:57.564 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:57.564 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:57.564 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83781 00:32:57.564 killing process with pid 83781 00:32:57.564 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:57.564 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:57.564 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83781' 00:32:57.564 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83781 00:32:57.564 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83781 00:32:58.943 [2024-12-10 11:40:25.693413] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:58.943 [2024-12-10 11:40:25.712425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.943 [2024-12-10 11:40:25.712469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:58.943 [2024-12-10 11:40:25.712486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:58.943 [2024-12-10 11:40:25.712496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.943 [2024-12-10 11:40:25.712521] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:58.943 [2024-12-10 11:40:25.716906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.943 [2024-12-10 11:40:25.716943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:58.943 [2024-12-10 11:40:25.716956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.375 ms 00:32:58.943 [2024-12-10 11:40:25.716972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.130 [2024-12-10 11:40:32.788956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:07.130 [2024-12-10 11:40:32.789025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:07.130 [2024-12-10 11:40:32.789046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7083.429 ms 00:33:07.130 [2024-12-10 11:40:32.789073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.130 [2024-12-10 11:40:32.790098] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:33:07.130 [2024-12-10 11:40:32.790131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:07.130 [2024-12-10 11:40:32.790144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.008 ms 00:33:07.130 [2024-12-10 11:40:32.790154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.130 [2024-12-10 11:40:32.791092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:07.130 [2024-12-10 11:40:32.791118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:07.130 [2024-12-10 11:40:32.791132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.896 ms 00:33:07.130 [2024-12-10 11:40:32.791149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.130 [2024-12-10 11:40:32.806008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:07.130 [2024-12-10 11:40:32.806050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:07.130 [2024-12-10 11:40:32.806070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.829 ms 00:33:07.130 [2024-12-10 11:40:32.806080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.130 [2024-12-10 11:40:32.814998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:07.130 [2024-12-10 11:40:32.815039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:07.130 [2024-12-10 11:40:32.815069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.878 ms 00:33:07.130 [2024-12-10 11:40:32.815079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.130 [2024-12-10 11:40:32.815174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:07.130 [2024-12-10 11:40:32.815194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:07.130 [2024-12-10 11:40:32.815205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:33:07.130 [2024-12-10 11:40:32.815214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.130 [2024-12-10 11:40:32.829436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:07.130 [2024-12-10 11:40:32.829474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:07.130 [2024-12-10 11:40:32.829486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.228 ms 00:33:07.130 [2024-12-10 11:40:32.829495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.130 [2024-12-10 11:40:32.843689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:07.130 [2024-12-10 11:40:32.843722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:07.130 [2024-12-10 11:40:32.843733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.166 ms 00:33:07.130 [2024-12-10 11:40:32.843742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.130 [2024-12-10 11:40:32.858061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:07.130 [2024-12-10 11:40:32.858094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:07.130 [2024-12-10 11:40:32.858105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.291 ms 00:33:07.130 [2024-12-10 11:40:32.858115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.130 [2024-12-10 11:40:32.872044] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:07.130 [2024-12-10 11:40:32.872079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:07.130 [2024-12-10 11:40:32.872091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.858 ms 00:33:07.130 [2024-12-10 11:40:32.872100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.130 [2024-12-10 11:40:32.872149] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:07.130 [2024-12-10 11:40:32.872178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:07.130 [2024-12-10 11:40:32.872190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:07.130 [2024-12-10 11:40:32.872202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:07.130 [2024-12-10 11:40:32.872213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:07.130 [2024-12-10 11:40:32.872224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:07.130 [2024-12-10 11:40:32.872234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:07.130 [2024-12-10 11:40:32.872244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:07.130 [2024-12-10 11:40:32.872255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:07.130 [2024-12-10 11:40:32.872266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:07.130 [2024-12-10 11:40:32.872276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:07.130 [2024-12-10 11:40:32.872287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:07.130 [2024-12-10 11:40:32.872297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:07.130 [2024-12-10 11:40:32.872307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:07.130 [2024-12-10 11:40:32.872317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:07.130 [2024-12-10 11:40:32.872328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:07.130 [2024-12-10 11:40:32.872338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:07.130 [2024-12-10 11:40:32.872348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:07.130 [2024-12-10 11:40:32.872358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:07.130 [2024-12-10 11:40:32.872371] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:07.130 [2024-12-10 11:40:32.872380] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 111c6f31-3cf4-4c72-912f-760942f15cd4 00:33:07.131 [2024-12-10 11:40:32.872391] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:07.131 [2024-12-10 11:40:32.872400] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:33:07.131 [2024-12-10 11:40:32.872410] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:33:07.131 [2024-12-10 11:40:32.872421] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:33:07.131 [2024-12-10 11:40:32.872440] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:07.131 [2024-12-10 11:40:32.872450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:07.131 [2024-12-10 11:40:32.872479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:07.131 [2024-12-10 11:40:32.872488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:07.131 [2024-12-10 11:40:32.872497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:07.131 [2024-12-10 11:40:32.872507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:07.131 [2024-12-10 11:40:32.872517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:07.131 [2024-12-10 11:40:32.872528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.359 ms 00:33:07.131 [2024-12-10 11:40:32.872537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.131 [2024-12-10 11:40:32.891422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:07.131 [2024-12-10 11:40:32.891455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:07.131 [2024-12-10 11:40:32.891473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.884 ms 00:33:07.131 [2024-12-10 11:40:32.891482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.131 [2024-12-10 11:40:32.892091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:07.131 [2024-12-10 11:40:32.892108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:07.131 [2024-12-10 11:40:32.892119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.573 ms 00:33:07.131 [2024-12-10 11:40:32.892129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.131 [2024-12-10 11:40:32.954691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:07.131 [2024-12-10 11:40:32.954732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:07.131 [2024-12-10 11:40:32.954744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:07.131 [2024-12-10 11:40:32.954754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.131 [2024-12-10 11:40:32.954802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:07.131 [2024-12-10 11:40:32.954813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:07.131 [2024-12-10 11:40:32.954823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:07.131 [2024-12-10 11:40:32.954832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.131 [2024-12-10 11:40:32.954916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:07.131 [2024-12-10 11:40:32.954940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:07.131 [2024-12-10 11:40:32.954955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:07.131 [2024-12-10 11:40:32.954965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.131 [2024-12-10 11:40:32.954983] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:07.131 [2024-12-10 11:40:32.954994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:07.131 [2024-12-10 11:40:32.955009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:07.131 [2024-12-10 11:40:32.955019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.131 [2024-12-10 11:40:33.071795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:07.131 [2024-12-10 11:40:33.071848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:07.131 [2024-12-10 11:40:33.071867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:07.131 [2024-12-10 11:40:33.071894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.131 [2024-12-10 11:40:33.168579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:07.131 [2024-12-10 11:40:33.168622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:07.131 [2024-12-10 11:40:33.168637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:07.131 [2024-12-10 11:40:33.168648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.131 [2024-12-10 11:40:33.168754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:07.131 [2024-12-10 11:40:33.168766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:07.131 [2024-12-10 11:40:33.168778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:07.131 [2024-12-10 11:40:33.168792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.131 [2024-12-10 11:40:33.168837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:07.131 [2024-12-10 11:40:33.168849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:07.131 [2024-12-10 11:40:33.168859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:07.131 [2024-12-10 11:40:33.168870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.131 [2024-12-10 11:40:33.168990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:07.131 [2024-12-10 11:40:33.169004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:07.131 [2024-12-10 11:40:33.169015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:07.131 [2024-12-10 11:40:33.169026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.131 [2024-12-10 11:40:33.169067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:07.131 [2024-12-10 11:40:33.169080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:07.131 [2024-12-10 11:40:33.169091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:07.131 [2024-12-10 11:40:33.169101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.131 [2024-12-10 11:40:33.169140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:07.131 [2024-12-10 11:40:33.169151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:07.131 [2024-12-10 11:40:33.169161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:07.131 [2024-12-10 11:40:33.169172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.131 
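The statistics dump a few lines up makes the write amplification explicit: serving 524288 user writes cost 786752 total writes, and the reported WAF is simply their ratio.

# WAF = total writes / user writes, from the ftl_dev_dump_stats lines above:
awk 'BEGIN { printf "%.4f\n", 786752 / 524288 }'   # -> 1.5006, matching the log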
[2024-12-10 11:40:33.169224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:07.131 [2024-12-10 11:40:33.169236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:07.131 [2024-12-10 11:40:33.169247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:07.131 [2024-12-10 11:40:33.169257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:07.131 [2024-12-10 11:40:33.169389] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7469.033 ms, result 0 00:33:10.424 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:10.424 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:33:10.424 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:10.424 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:10.424 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:10.424 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84346 00:33:10.424 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:10.424 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:10.424 11:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84346 00:33:10.424 11:40:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84346 ']' 00:33:10.424 11:40:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.424 11:40:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.424 11:40:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.424 11:40:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.424 11:40:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:10.684 [2024-12-10 11:40:37.560357] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
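tcp_target_setup restarts the main target from the configuration captured before shutdown (the earlier save_config call presumably produced tgt.json), so the FTL bdev comes back with prep_upgrade_on_shutdown already in effect. Stripped down:

# Relaunch the main target on core 0, replaying the saved JSON configuration.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"   # default RPC socket /var/tmp/spdk.sock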
00:33:10.684 [2024-12-10 11:40:37.560480] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84346 ] 00:33:10.684 [2024-12-10 11:40:37.726503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.943 [2024-12-10 11:40:37.831896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.882 [2024-12-10 11:40:38.738663] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:11.882 [2024-12-10 11:40:38.738728] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:11.882 [2024-12-10 11:40:38.884935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.882 [2024-12-10 11:40:38.884982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:11.882 [2024-12-10 11:40:38.885014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:33:11.882 [2024-12-10 11:40:38.885025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.882 [2024-12-10 11:40:38.885082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.882 [2024-12-10 11:40:38.885095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:11.882 [2024-12-10 11:40:38.885106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:33:11.882 [2024-12-10 11:40:38.885117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.882 [2024-12-10 11:40:38.885146] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:11.882 [2024-12-10 11:40:38.886130] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:11.882 [2024-12-10 11:40:38.886169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.882 [2024-12-10 11:40:38.886180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:11.882 [2024-12-10 11:40:38.886192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.035 ms 00:33:11.882 [2024-12-10 11:40:38.886202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.882 [2024-12-10 11:40:38.887662] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:11.882 [2024-12-10 11:40:38.906141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.882 [2024-12-10 11:40:38.906185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:11.882 [2024-12-10 11:40:38.906206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.509 ms 00:33:11.882 [2024-12-10 11:40:38.906216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.882 [2024-12-10 11:40:38.906299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.882 [2024-12-10 11:40:38.906312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:11.882 [2024-12-10 11:40:38.906323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:33:11.882 [2024-12-10 11:40:38.906334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.882 [2024-12-10 11:40:38.913299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.882 [2024-12-10 
11:40:38.913333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:11.882 [2024-12-10 11:40:38.913345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.895 ms 00:33:11.882 [2024-12-10 11:40:38.913355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.882 [2024-12-10 11:40:38.913433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.882 [2024-12-10 11:40:38.913448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:11.882 [2024-12-10 11:40:38.913459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:33:11.882 [2024-12-10 11:40:38.913469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.882 [2024-12-10 11:40:38.913514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.882 [2024-12-10 11:40:38.913529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:11.882 [2024-12-10 11:40:38.913540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:33:11.882 [2024-12-10 11:40:38.913550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.882 [2024-12-10 11:40:38.913587] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:11.882 [2024-12-10 11:40:38.918383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.882 [2024-12-10 11:40:38.918420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:11.882 [2024-12-10 11:40:38.918431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.809 ms 00:33:11.882 [2024-12-10 11:40:38.918445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.882 [2024-12-10 11:40:38.918492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.882 [2024-12-10 11:40:38.918504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:11.882 [2024-12-10 11:40:38.918514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:11.882 [2024-12-10 11:40:38.918525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.882 [2024-12-10 11:40:38.918583] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:11.882 [2024-12-10 11:40:38.918612] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:11.883 [2024-12-10 11:40:38.918647] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:11.883 [2024-12-10 11:40:38.918665] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:11.883 [2024-12-10 11:40:38.918753] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:11.883 [2024-12-10 11:40:38.918766] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:11.883 [2024-12-10 11:40:38.918779] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:11.883 [2024-12-10 11:40:38.918792] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:11.883 [2024-12-10 11:40:38.918804] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:33:11.883 [2024-12-10 11:40:38.918819] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:11.883 [2024-12-10 11:40:38.918838] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:11.883 [2024-12-10 11:40:38.918849] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:11.883 [2024-12-10 11:40:38.918858] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:11.883 [2024-12-10 11:40:38.918869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.883 [2024-12-10 11:40:38.918879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:11.883 [2024-12-10 11:40:38.918889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 00:33:11.883 [2024-12-10 11:40:38.918899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.883 [2024-12-10 11:40:38.918989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.883 [2024-12-10 11:40:38.919001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:11.883 [2024-12-10 11:40:38.919015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:33:11.883 [2024-12-10 11:40:38.919025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.883 [2024-12-10 11:40:38.919116] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:11.883 [2024-12-10 11:40:38.919129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:11.883 [2024-12-10 11:40:38.919139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:11.883 [2024-12-10 11:40:38.919149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:11.883 [2024-12-10 11:40:38.919160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:11.883 [2024-12-10 11:40:38.919169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:11.883 [2024-12-10 11:40:38.919179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:11.883 [2024-12-10 11:40:38.919188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:11.883 [2024-12-10 11:40:38.919198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:11.883 [2024-12-10 11:40:38.919207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:11.883 [2024-12-10 11:40:38.919216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:11.883 [2024-12-10 11:40:38.919225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:11.883 [2024-12-10 11:40:38.919235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:11.883 [2024-12-10 11:40:38.919244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:11.883 [2024-12-10 11:40:38.919254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:11.883 [2024-12-10 11:40:38.919263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:11.883 [2024-12-10 11:40:38.919273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:11.883 [2024-12-10 11:40:38.919282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:11.883 [2024-12-10 11:40:38.919292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:11.883 [2024-12-10 11:40:38.919301] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:11.883 [2024-12-10 11:40:38.919311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:11.883 [2024-12-10 11:40:38.919320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:11.883 [2024-12-10 11:40:38.919329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:11.883 [2024-12-10 11:40:38.919350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:11.883 [2024-12-10 11:40:38.919360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:11.883 [2024-12-10 11:40:38.919369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:11.883 [2024-12-10 11:40:38.919379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:11.883 [2024-12-10 11:40:38.919388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:11.883 [2024-12-10 11:40:38.919397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:11.883 [2024-12-10 11:40:38.919406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:11.883 [2024-12-10 11:40:38.919416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:11.883 [2024-12-10 11:40:38.919425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:11.883 [2024-12-10 11:40:38.919434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:11.883 [2024-12-10 11:40:38.919443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:11.883 [2024-12-10 11:40:38.919453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:11.883 [2024-12-10 11:40:38.919462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:11.883 [2024-12-10 11:40:38.919471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:11.883 [2024-12-10 11:40:38.919481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:11.883 [2024-12-10 11:40:38.919490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:11.883 [2024-12-10 11:40:38.919499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:11.883 [2024-12-10 11:40:38.919508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:11.883 [2024-12-10 11:40:38.919517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:11.883 [2024-12-10 11:40:38.919526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:11.883 [2024-12-10 11:40:38.919535] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:11.883 [2024-12-10 11:40:38.919545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:11.883 [2024-12-10 11:40:38.919555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:11.883 [2024-12-10 11:40:38.919565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:11.883 [2024-12-10 11:40:38.919578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:11.883 [2024-12-10 11:40:38.919588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:11.883 [2024-12-10 11:40:38.919597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:11.883 [2024-12-10 11:40:38.919607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:11.883 [2024-12-10 11:40:38.919616] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:11.883 [2024-12-10 11:40:38.919626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:11.883 [2024-12-10 11:40:38.919637] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:11.883 [2024-12-10 11:40:38.919649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:11.883 [2024-12-10 11:40:38.919661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:11.883 [2024-12-10 11:40:38.919671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:11.883 [2024-12-10 11:40:38.919682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:11.883 [2024-12-10 11:40:38.919693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:11.883 [2024-12-10 11:40:38.919703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:11.883 [2024-12-10 11:40:38.919714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:11.883 [2024-12-10 11:40:38.919724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:11.883 [2024-12-10 11:40:38.919735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:11.883 [2024-12-10 11:40:38.919745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:11.883 [2024-12-10 11:40:38.919756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:11.883 [2024-12-10 11:40:38.919766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:11.883 [2024-12-10 11:40:38.919777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:11.883 [2024-12-10 11:40:38.919787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:11.883 [2024-12-10 11:40:38.919798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:11.883 [2024-12-10 11:40:38.919808] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:11.883 [2024-12-10 11:40:38.919819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:11.883 [2024-12-10 11:40:38.919830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:11.883 [2024-12-10 11:40:38.919840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:11.883 [2024-12-10 11:40:38.919850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:11.883 [2024-12-10 11:40:38.919861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:11.883 [2024-12-10 11:40:38.919871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.883 [2024-12-10 11:40:38.919882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:11.883 [2024-12-10 11:40:38.919892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.811 ms 00:33:11.883 [2024-12-10 11:40:38.919902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.883 [2024-12-10 11:40:38.919961] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:33:11.884 [2024-12-10 11:40:38.919975] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:16.080 [2024-12-10 11:40:42.541993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.080 [2024-12-10 11:40:42.542052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:16.080 [2024-12-10 11:40:42.542086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3627.912 ms 00:33:16.080 [2024-12-10 11:40:42.542096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.080 [2024-12-10 11:40:42.580021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.080 [2024-12-10 11:40:42.580073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:16.080 [2024-12-10 11:40:42.580089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.617 ms 00:33:16.080 [2024-12-10 11:40:42.580100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.080 [2024-12-10 11:40:42.580222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.080 [2024-12-10 11:40:42.580241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:16.080 [2024-12-10 11:40:42.580253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:33:16.081 [2024-12-10 11:40:42.580263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.626428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.626476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:16.081 [2024-12-10 11:40:42.626493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.177 ms 00:33:16.081 [2024-12-10 11:40:42.626504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.626566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.626578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:16.081 [2024-12-10 11:40:42.626589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:16.081 [2024-12-10 11:40:42.626599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.627117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.627141] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:16.081 [2024-12-10 11:40:42.627152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.459 ms 00:33:16.081 [2024-12-10 11:40:42.627163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.627208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.627219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:16.081 [2024-12-10 11:40:42.627230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:33:16.081 [2024-12-10 11:40:42.627241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.647865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.647908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:16.081 [2024-12-10 11:40:42.647937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.633 ms 00:33:16.081 [2024-12-10 11:40:42.647965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.676390] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:16.081 [2024-12-10 11:40:42.676431] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:16.081 [2024-12-10 11:40:42.676463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.676474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:33:16.081 [2024-12-10 11:40:42.676485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.425 ms 00:33:16.081 [2024-12-10 11:40:42.676495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.695268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.695321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:33:16.081 [2024-12-10 11:40:42.695335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.754 ms 00:33:16.081 [2024-12-10 11:40:42.695345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.712374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.712408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:33:16.081 [2024-12-10 11:40:42.712420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.994 ms 00:33:16.081 [2024-12-10 11:40:42.712429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.729657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.729695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:33:16.081 [2024-12-10 11:40:42.729724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.199 ms 00:33:16.081 [2024-12-10 11:40:42.729734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.730484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.730517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:16.081 [2024-12-10 
11:40:42.730530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.648 ms 00:33:16.081 [2024-12-10 11:40:42.730541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.812431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.812492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:16.081 [2024-12-10 11:40:42.812507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 81.998 ms 00:33:16.081 [2024-12-10 11:40:42.812534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.822671] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:16.081 [2024-12-10 11:40:42.823352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.823382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:16.081 [2024-12-10 11:40:42.823395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.783 ms 00:33:16.081 [2024-12-10 11:40:42.823407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.823502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.823519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:33:16.081 [2024-12-10 11:40:42.823531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:16.081 [2024-12-10 11:40:42.823541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.823607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.823620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:16.081 [2024-12-10 11:40:42.823631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:33:16.081 [2024-12-10 11:40:42.823641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.823665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.823676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:16.081 [2024-12-10 11:40:42.823690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:16.081 [2024-12-10 11:40:42.823700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.823738] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:16.081 [2024-12-10 11:40:42.823750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.823760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:16.081 [2024-12-10 11:40:42.823771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:16.081 [2024-12-10 11:40:42.823781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.857800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.857841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:16.081 [2024-12-10 11:40:42.857872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.051 ms 00:33:16.081 [2024-12-10 11:40:42.857883] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.857985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.081 [2024-12-10 11:40:42.857998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:16.081 [2024-12-10 11:40:42.858009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:33:16.081 [2024-12-10 11:40:42.858021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.081 [2024-12-10 11:40:42.859180] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3980.236 ms, result 0 00:33:16.081 [2024-12-10 11:40:42.874174] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:16.081 [2024-12-10 11:40:42.890181] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:16.081 [2024-12-10 11:40:42.898786] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:16.649 11:40:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:16.649 11:40:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:16.649 11:40:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:16.649 11:40:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:16.649 11:40:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:16.908 [2024-12-10 11:40:43.770126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.908 [2024-12-10 11:40:43.770173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:16.908 [2024-12-10 11:40:43.770192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:16.908 [2024-12-10 11:40:43.770219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.908 [2024-12-10 11:40:43.770244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.908 [2024-12-10 11:40:43.770256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:16.908 [2024-12-10 11:40:43.770267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:16.908 [2024-12-10 11:40:43.770277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.908 [2024-12-10 11:40:43.770297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.908 [2024-12-10 11:40:43.770309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:16.908 [2024-12-10 11:40:43.770319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:16.908 [2024-12-10 11:40:43.770329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.908 [2024-12-10 11:40:43.770388] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.256 ms, result 0 00:33:16.908 true 00:33:16.908 11:40:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:16.908 { 00:33:16.908 "name": "ftl", 00:33:16.908 "properties": [ 00:33:16.908 { 00:33:16.908 "name": "superblock_version", 00:33:16.908 "value": 5, 00:33:16.908 "read-only": true 00:33:16.908 }, 
00:33:16.908 { 00:33:16.908 "name": "base_device", 00:33:16.908 "bands": [ 00:33:16.908 { 00:33:16.908 "id": 0, 00:33:16.908 "state": "CLOSED", 00:33:16.908 "validity": 1.0 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 1, 00:33:16.908 "state": "CLOSED", 00:33:16.908 "validity": 1.0 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 2, 00:33:16.908 "state": "CLOSED", 00:33:16.908 "validity": 0.007843137254901933 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 3, 00:33:16.908 "state": "FREE", 00:33:16.908 "validity": 0.0 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 4, 00:33:16.908 "state": "FREE", 00:33:16.908 "validity": 0.0 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 5, 00:33:16.908 "state": "FREE", 00:33:16.908 "validity": 0.0 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 6, 00:33:16.908 "state": "FREE", 00:33:16.908 "validity": 0.0 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 7, 00:33:16.908 "state": "FREE", 00:33:16.908 "validity": 0.0 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 8, 00:33:16.908 "state": "FREE", 00:33:16.908 "validity": 0.0 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 9, 00:33:16.908 "state": "FREE", 00:33:16.908 "validity": 0.0 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 10, 00:33:16.908 "state": "FREE", 00:33:16.908 "validity": 0.0 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 11, 00:33:16.908 "state": "FREE", 00:33:16.908 "validity": 0.0 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 12, 00:33:16.908 "state": "FREE", 00:33:16.908 "validity": 0.0 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 13, 00:33:16.908 "state": "FREE", 00:33:16.908 "validity": 0.0 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 14, 00:33:16.908 "state": "FREE", 00:33:16.908 "validity": 0.0 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 15, 00:33:16.908 "state": "FREE", 00:33:16.908 "validity": 0.0 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 16, 00:33:16.908 "state": "FREE", 00:33:16.908 "validity": 0.0 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 17, 00:33:16.908 "state": "FREE", 00:33:16.908 "validity": 0.0 00:33:16.908 } 00:33:16.908 ], 00:33:16.908 "read-only": true 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "name": "cache_device", 00:33:16.908 "type": "bdev", 00:33:16.908 "chunks": [ 00:33:16.908 { 00:33:16.908 "id": 0, 00:33:16.908 "state": "INACTIVE", 00:33:16.908 "utilization": 0.0 00:33:16.908 }, 00:33:16.908 { 00:33:16.908 "id": 1, 00:33:16.908 "state": "OPEN", 00:33:16.909 "utilization": 0.0 00:33:16.909 }, 00:33:16.909 { 00:33:16.909 "id": 2, 00:33:16.909 "state": "OPEN", 00:33:16.909 "utilization": 0.0 00:33:16.909 }, 00:33:16.909 { 00:33:16.909 "id": 3, 00:33:16.909 "state": "FREE", 00:33:16.909 "utilization": 0.0 00:33:16.909 }, 00:33:16.909 { 00:33:16.909 "id": 4, 00:33:16.909 "state": "FREE", 00:33:16.909 "utilization": 0.0 00:33:16.909 } 00:33:16.909 ], 00:33:16.909 "read-only": true 00:33:16.909 }, 00:33:16.909 { 00:33:16.909 "name": "verbose_mode", 00:33:16.909 "value": true, 00:33:16.909 "unit": "", 00:33:16.909 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:16.909 }, 00:33:16.909 { 00:33:16.909 "name": "prep_upgrade_on_shutdown", 00:33:16.909 "value": false, 00:33:16.909 "unit": "", 00:33:16.909 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:16.909 } 00:33:16.909 ] 00:33:16.909 } 00:33:16.909 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:16.909 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:33:16.909 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:17.168 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:33:17.168 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:33:17.168 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:33:17.168 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:17.168 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:33:17.428 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:33:17.428 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:33:17.428 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:33:17.428 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:17.428 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:17.428 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:17.428 Validate MD5 checksum, iteration 1 00:33:17.428 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:17.428 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:17.428 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:17.428 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:17.428 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:17.428 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:17.428 11:40:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:17.428 [2024-12-10 11:40:44.500768] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:33:17.428 [2024-12-10 11:40:44.500883] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84451 ] 00:33:17.688 [2024-12-10 11:40:44.685837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.688 [2024-12-10 11:40:44.792917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.594  [2024-12-10T11:40:47.276Z] Copying: 650/1024 [MB] (650 MBps) [2024-12-10T11:40:48.655Z] Copying: 1024/1024 [MB] (average 646 MBps) 00:33:21.541 00:33:21.800 11:40:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:21.800 11:40:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:23.705 11:40:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:23.705 11:40:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=781888b064a719c07bd4599fbe150746 00:33:23.705 11:40:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 781888b064a719c07bd4599fbe150746 != \7\8\1\8\8\8\b\0\6\4\a\7\1\9\c\0\7\b\d\4\5\9\9\f\b\e\1\5\0\7\4\6 ]] 00:33:23.705 11:40:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:23.705 11:40:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:23.705 Validate MD5 checksum, iteration 2 00:33:23.705 11:40:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:23.705 11:40:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:23.705 11:40:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:23.705 11:40:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:23.705 11:40:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:23.705 11:40:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:23.705 11:40:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:23.705 [2024-12-10 11:40:50.422040] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
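To recap the xtrace above in one place: before touching checksums, upgrade_shutdown.sh asserts via bdev_ftl_get_properties piped through jq that no cache chunk has non-zero utilization and no band is OPENED; it then reads the ftl bdev back in 1 GiB windows over NVMe/TCP and compares each window's md5 against the sum recorded when the data was written. A minimal sketch of that flow, with repo paths shortened, assuming the tcp_dd helper from test/ftl/common.sh and illustrative ref_sums/iterations variables standing in for state the script established earlier:

    # Minimal sketch of the pre-shutdown validation traced above.
    # ref_sums and iterations are illustrative stand-ins, not script verbatim.
    used=$(scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length')
    opened=$(scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "bands")
             | .bands[] | select(.state == "OPENED")] | length')
    [[ $used -eq 0 && $opened -eq 0 ]] || exit 1
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read the next 1024 x 1 MiB window from ftln1 into a scratch file.
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        [[ $sum == "${ref_sums[i]}" ]] || exit 1   # mismatch fails the run
    done

The backslash-heavy string in the traced comparison (\7\8\1\8\8\8...) is just bash xtrace quoting the right-hand side of [[ ... != ... ]], which is a pattern context; the underlying check is the plain string equality shown in the sketch.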
00:33:23.705 [2024-12-10 11:40:50.422603] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84518 ] 00:33:23.705 [2024-12-10 11:40:50.601963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.705 [2024-12-10 11:40:50.726216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.612  [2024-12-10T11:40:53.294Z] Copying: 639/1024 [MB] (639 MBps) [2024-12-10T11:40:54.673Z] Copying: 1024/1024 [MB] (average 642 MBps) 00:33:27.559 00:33:27.559 11:40:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:27.559 11:40:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6780fdae5a80b0b6a2d4c5a8281521a2 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6780fdae5a80b0b6a2d4c5a8281521a2 != \6\7\8\0\f\d\a\e\5\a\8\0\b\0\b\6\a\2\d\4\c\5\a\8\2\8\1\5\2\1\a\2 ]] 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84346 ]] 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84346 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84588 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84588 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84588 ']' 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:29.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
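Both iterations passing marks the end of the clean phase; what follows is the point of the test. tcp_target_shutdown_dirty SIGKILLs the old target (pid 84346) so FTL never gets to clear the dirty state that was set above, and tcp_target_setup brings up a fresh spdk_tgt (pid 84588) from the tgt.json saved while the bdev was live. In outline, following the trace (a sketch of the common.sh helpers; argument handling and error paths omitted):

    # Dirty shutdown: no chance for FTL to persist a clean-shutdown marker.
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
    # Fresh target from the config saved while the ftl bdev was live.
    build/bin/spdk_tgt --cpumask='[0]' --config="$testdir/config/tgt.json" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # blocks until /var/tmp/spdk.sock answers

Because the superblock is still marked dirty, the startup trace below takes the recovery path (Initialize recovery, Recover band state, P2L checkpoint preprocessing) rather than the clean-start sequence seen for pid 84346.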
00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:29.465 11:40:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:29.465 [2024-12-10 11:40:56.161489] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:33:29.466 [2024-12-10 11:40:56.161615] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84588 ] 00:33:29.466 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84346 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:33:29.466 [2024-12-10 11:40:56.340083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.466 [2024-12-10 11:40:56.466560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.404 [2024-12-10 11:40:57.497881] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:30.404 [2024-12-10 11:40:57.497967] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:30.664 [2024-12-10 11:40:57.645226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.664 [2024-12-10 11:40:57.645275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:30.664 [2024-12-10 11:40:57.645292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:30.664 [2024-12-10 11:40:57.645303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.664 [2024-12-10 11:40:57.645363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.664 [2024-12-10 11:40:57.645375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:30.664 [2024-12-10 11:40:57.645387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:33:30.664 [2024-12-10 11:40:57.645397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.664 [2024-12-10 11:40:57.645427] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:30.664 [2024-12-10 11:40:57.646420] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:30.664 [2024-12-10 11:40:57.646451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.664 [2024-12-10 11:40:57.646463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:30.664 [2024-12-10 11:40:57.646474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.037 ms 00:33:30.664 [2024-12-10 11:40:57.646485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.664 [2024-12-10 11:40:57.646852] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:30.664 [2024-12-10 11:40:57.671816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.664 [2024-12-10 11:40:57.671857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:30.664 [2024-12-10 11:40:57.671872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.005 ms 00:33:30.664 [2024-12-10 11:40:57.671883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.664 [2024-12-10 11:40:57.685521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:33:30.664 [2024-12-10 11:40:57.685569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:30.664 [2024-12-10 11:40:57.685581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:33:30.664 [2024-12-10 11:40:57.685591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.664 [2024-12-10 11:40:57.686079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.664 [2024-12-10 11:40:57.686103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:30.665 [2024-12-10 11:40:57.686114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.408 ms 00:33:30.665 [2024-12-10 11:40:57.686124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.665 [2024-12-10 11:40:57.686191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.665 [2024-12-10 11:40:57.686205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:30.665 [2024-12-10 11:40:57.686216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:33:30.665 [2024-12-10 11:40:57.686227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.665 [2024-12-10 11:40:57.686256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.665 [2024-12-10 11:40:57.686267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:30.665 [2024-12-10 11:40:57.686277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:33:30.665 [2024-12-10 11:40:57.686287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.665 [2024-12-10 11:40:57.686311] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:30.665 [2024-12-10 11:40:57.690176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.665 [2024-12-10 11:40:57.690209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:30.665 [2024-12-10 11:40:57.690221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.876 ms 00:33:30.665 [2024-12-10 11:40:57.690247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.665 [2024-12-10 11:40:57.690284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.665 [2024-12-10 11:40:57.690295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:30.665 [2024-12-10 11:40:57.690306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:30.665 [2024-12-10 11:40:57.690316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.665 [2024-12-10 11:40:57.690354] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:30.665 [2024-12-10 11:40:57.690381] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:30.665 [2024-12-10 11:40:57.690418] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:30.665 [2024-12-10 11:40:57.690440] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:30.665 [2024-12-10 11:40:57.690529] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:30.665 [2024-12-10 11:40:57.690543] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:30.665 [2024-12-10 11:40:57.690556] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:30.665 [2024-12-10 11:40:57.690569] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:30.665 [2024-12-10 11:40:57.690583] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:30.665 [2024-12-10 11:40:57.690594] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:30.665 [2024-12-10 11:40:57.690604] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:30.665 [2024-12-10 11:40:57.690614] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:30.665 [2024-12-10 11:40:57.690623] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:30.665 [2024-12-10 11:40:57.690638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.665 [2024-12-10 11:40:57.690648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:30.665 [2024-12-10 11:40:57.690659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.288 ms 00:33:30.665 [2024-12-10 11:40:57.690668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.665 [2024-12-10 11:40:57.690738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.665 [2024-12-10 11:40:57.690749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:30.665 [2024-12-10 11:40:57.690760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:33:30.665 [2024-12-10 11:40:57.690769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.665 [2024-12-10 11:40:57.690857] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:30.665 [2024-12-10 11:40:57.690874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:30.665 [2024-12-10 11:40:57.690885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:30.665 [2024-12-10 11:40:57.690896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:30.665 [2024-12-10 11:40:57.690907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:30.665 [2024-12-10 11:40:57.690916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:30.665 [2024-12-10 11:40:57.690926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:30.665 [2024-12-10 11:40:57.690954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:30.665 [2024-12-10 11:40:57.690964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:30.665 [2024-12-10 11:40:57.690972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:30.665 [2024-12-10 11:40:57.690987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:30.665 [2024-12-10 11:40:57.690996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:30.665 [2024-12-10 11:40:57.691005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:30.665 [2024-12-10 11:40:57.691015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:30.665 [2024-12-10 11:40:57.691024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:33:30.665 [2024-12-10 11:40:57.691033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:30.665 [2024-12-10 11:40:57.691043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:30.665 [2024-12-10 11:40:57.691054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:30.665 [2024-12-10 11:40:57.691063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:30.665 [2024-12-10 11:40:57.691072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:30.665 [2024-12-10 11:40:57.691082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:30.665 [2024-12-10 11:40:57.691102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:30.665 [2024-12-10 11:40:57.691112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:30.665 [2024-12-10 11:40:57.691121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:30.665 [2024-12-10 11:40:57.691130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:30.665 [2024-12-10 11:40:57.691139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:30.665 [2024-12-10 11:40:57.691148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:30.665 [2024-12-10 11:40:57.691157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:30.665 [2024-12-10 11:40:57.691166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:30.665 [2024-12-10 11:40:57.691175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:30.665 [2024-12-10 11:40:57.691185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:30.665 [2024-12-10 11:40:57.691194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:30.665 [2024-12-10 11:40:57.691203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:30.665 [2024-12-10 11:40:57.691212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:30.665 [2024-12-10 11:40:57.691221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:30.665 [2024-12-10 11:40:57.691230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:30.665 [2024-12-10 11:40:57.691239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:30.665 [2024-12-10 11:40:57.691248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:30.665 [2024-12-10 11:40:57.691257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:30.665 [2024-12-10 11:40:57.691266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:30.665 [2024-12-10 11:40:57.691275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:30.665 [2024-12-10 11:40:57.691283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:30.665 [2024-12-10 11:40:57.691295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:30.665 [2024-12-10 11:40:57.691304] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:30.665 [2024-12-10 11:40:57.691314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:30.665 [2024-12-10 11:40:57.691324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:30.665 [2024-12-10 11:40:57.691334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:33:30.665 [2024-12-10 11:40:57.691344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:30.665 [2024-12-10 11:40:57.691354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:30.665 [2024-12-10 11:40:57.691364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:30.665 [2024-12-10 11:40:57.691373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:30.665 [2024-12-10 11:40:57.691383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:30.665 [2024-12-10 11:40:57.691392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:30.665 [2024-12-10 11:40:57.691403] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:30.665 [2024-12-10 11:40:57.691416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:30.665 [2024-12-10 11:40:57.691427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:30.665 [2024-12-10 11:40:57.691438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:30.665 [2024-12-10 11:40:57.691448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:30.665 [2024-12-10 11:40:57.691458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:30.665 [2024-12-10 11:40:57.691468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:30.665 [2024-12-10 11:40:57.691478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:30.665 [2024-12-10 11:40:57.691488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:30.665 [2024-12-10 11:40:57.691500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:30.665 [2024-12-10 11:40:57.691510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:30.665 [2024-12-10 11:40:57.691520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:30.666 [2024-12-10 11:40:57.691531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:30.666 [2024-12-10 11:40:57.691542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:30.666 [2024-12-10 11:40:57.691551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:30.666 [2024-12-10 11:40:57.691561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:30.666 [2024-12-10 11:40:57.691572] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:33:30.666 [2024-12-10 11:40:57.691583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:30.666 [2024-12-10 11:40:57.691598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:30.666 [2024-12-10 11:40:57.691609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:30.666 [2024-12-10 11:40:57.691619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:30.666 [2024-12-10 11:40:57.691635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:30.666 [2024-12-10 11:40:57.691645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.666 [2024-12-10 11:40:57.691656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:30.666 [2024-12-10 11:40:57.691666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.842 ms 00:33:30.666 [2024-12-10 11:40:57.691676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.666 [2024-12-10 11:40:57.733319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.666 [2024-12-10 11:40:57.733357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:30.666 [2024-12-10 11:40:57.733372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.661 ms 00:33:30.666 [2024-12-10 11:40:57.733399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.666 [2024-12-10 11:40:57.733442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.666 [2024-12-10 11:40:57.733453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:30.666 [2024-12-10 11:40:57.733465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:30.666 [2024-12-10 11:40:57.733476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.925 [2024-12-10 11:40:57.784257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.926 [2024-12-10 11:40:57.784296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:30.926 [2024-12-10 11:40:57.784310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.801 ms 00:33:30.926 [2024-12-10 11:40:57.784321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.926 [2024-12-10 11:40:57.784361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.926 [2024-12-10 11:40:57.784373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:30.926 [2024-12-10 11:40:57.784384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:30.926 [2024-12-10 11:40:57.784399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.926 [2024-12-10 11:40:57.784534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.926 [2024-12-10 11:40:57.784548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:30.926 [2024-12-10 11:40:57.784559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:33:30.926 [2024-12-10 11:40:57.784569] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:30.926 [2024-12-10 11:40:57.784616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.926 [2024-12-10 11:40:57.784643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:30.926 [2024-12-10 11:40:57.784655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:33:30.926 [2024-12-10 11:40:57.784666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.926 [2024-12-10 11:40:57.809609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.926 [2024-12-10 11:40:57.809647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:30.926 [2024-12-10 11:40:57.809661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.954 ms 00:33:30.926 [2024-12-10 11:40:57.809693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.926 [2024-12-10 11:40:57.809833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.926 [2024-12-10 11:40:57.809851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:33:30.926 [2024-12-10 11:40:57.809862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:33:30.926 [2024-12-10 11:40:57.809872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.926 [2024-12-10 11:40:57.860529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.926 [2024-12-10 11:40:57.860572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:33:30.926 [2024-12-10 11:40:57.860587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.716 ms 00:33:30.926 [2024-12-10 11:40:57.860600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.926 [2024-12-10 11:40:57.874402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.926 [2024-12-10 11:40:57.874451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:30.926 [2024-12-10 11:40:57.874493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.610 ms 00:33:30.926 [2024-12-10 11:40:57.874505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.926 [2024-12-10 11:40:57.964849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.926 [2024-12-10 11:40:57.964909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:30.926 [2024-12-10 11:40:57.964951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 90.423 ms 00:33:30.926 [2024-12-10 11:40:57.964962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.926 [2024-12-10 11:40:57.965199] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:33:30.926 [2024-12-10 11:40:57.965380] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:33:30.926 [2024-12-10 11:40:57.965551] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:33:30.926 [2024-12-10 11:40:57.965727] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:33:30.926 [2024-12-10 11:40:57.965742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.926 [2024-12-10 11:40:57.965754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:33:30.926 [2024-12-10 
11:40:57.965767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.729 ms 00:33:30.926 [2024-12-10 11:40:57.965778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.926 [2024-12-10 11:40:57.965843] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:33:30.926 [2024-12-10 11:40:57.965860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.926 [2024-12-10 11:40:57.965877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:33:30.926 [2024-12-10 11:40:57.965889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:33:30.926 [2024-12-10 11:40:57.965899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.926 [2024-12-10 11:40:57.986885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.926 [2024-12-10 11:40:57.986937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:33:30.926 [2024-12-10 11:40:57.986951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.973 ms 00:33:30.926 [2024-12-10 11:40:57.986962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.926 [2024-12-10 11:40:58.000471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.926 [2024-12-10 11:40:58.000510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:33:30.926 [2024-12-10 11:40:58.000523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:33:30.926 [2024-12-10 11:40:58.000535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.926 [2024-12-10 11:40:58.000658] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:33:30.926 [2024-12-10 11:40:58.001000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.926 [2024-12-10 11:40:58.001018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:30.926 [2024-12-10 11:40:58.001032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.345 ms 00:33:30.926 [2024-12-10 11:40:58.001043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.494 [2024-12-10 11:40:58.601067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.494 [2024-12-10 11:40:58.601139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:31.494 [2024-12-10 11:40:58.601158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 599.868 ms 00:33:31.494 [2024-12-10 11:40:58.601170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.494 [2024-12-10 11:40:58.606818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.494 [2024-12-10 11:40:58.606865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:31.494 [2024-12-10 11:40:58.606879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.005 ms 00:33:31.494 [2024-12-10 11:40:58.606890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.783 [2024-12-10 11:40:58.607419] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:33:31.783 [2024-12-10 11:40:58.607460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.783 [2024-12-10 11:40:58.607473] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:31.783 [2024-12-10 11:40:58.607486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.518 ms 00:33:31.783 [2024-12-10 11:40:58.607497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.783 [2024-12-10 11:40:58.607533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.783 [2024-12-10 11:40:58.607546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:31.783 [2024-12-10 11:40:58.607557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:31.783 [2024-12-10 11:40:58.607573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:31.783 [2024-12-10 11:40:58.607611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 607.940 ms, result 0 00:33:31.783 [2024-12-10 11:40:58.607657] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:33:31.783 [2024-12-10 11:40:58.607740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:31.783 [2024-12-10 11:40:58.607752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:31.783 [2024-12-10 11:40:58.607762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.084 ms 00:33:31.783 [2024-12-10 11:40:58.607771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.358 [2024-12-10 11:40:59.205388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.358 [2024-12-10 11:40:59.205457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:32.358 [2024-12-10 11:40:59.205489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 597.303 ms 00:33:32.358 [2024-12-10 11:40:59.205501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.358 [2024-12-10 11:40:59.211223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.358 [2024-12-10 11:40:59.211268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:32.358 [2024-12-10 11:40:59.211281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.193 ms 00:33:32.358 [2024-12-10 11:40:59.211291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.358 [2024-12-10 11:40:59.211821] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:33:32.358 [2024-12-10 11:40:59.211851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.358 [2024-12-10 11:40:59.211862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:32.358 [2024-12-10 11:40:59.211874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.531 ms 00:33:32.358 [2024-12-10 11:40:59.211885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.358 [2024-12-10 11:40:59.211932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.358 [2024-12-10 11:40:59.211945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:32.358 [2024-12-10 11:40:59.211956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:32.358 [2024-12-10 11:40:59.211966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.358 [2024-12-10 
11:40:59.212005] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 605.328 ms, result 0 00:33:32.358 [2024-12-10 11:40:59.212049] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:32.358 [2024-12-10 11:40:59.212063] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:32.358 [2024-12-10 11:40:59.212077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.358 [2024-12-10 11:40:59.212088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:33:32.358 [2024-12-10 11:40:59.212098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1213.412 ms 00:33:32.358 [2024-12-10 11:40:59.212109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.358 [2024-12-10 11:40:59.212143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.359 [2024-12-10 11:40:59.212160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:33:32.359 [2024-12-10 11:40:59.212170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:32.359 [2024-12-10 11:40:59.212181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.359 [2024-12-10 11:40:59.223069] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:32.359 [2024-12-10 11:40:59.223224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.359 [2024-12-10 11:40:59.223238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:32.359 [2024-12-10 11:40:59.223251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.044 ms 00:33:32.359 [2024-12-10 11:40:59.223261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.359 [2024-12-10 11:40:59.223856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.359 [2024-12-10 11:40:59.223889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:33:32.359 [2024-12-10 11:40:59.223905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.511 ms 00:33:32.359 [2024-12-10 11:40:59.223937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.359 [2024-12-10 11:40:59.225971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.359 [2024-12-10 11:40:59.225998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:33:32.359 [2024-12-10 11:40:59.226010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.015 ms 00:33:32.359 [2024-12-10 11:40:59.226020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.359 [2024-12-10 11:40:59.226063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.359 [2024-12-10 11:40:59.226075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:33:32.359 [2024-12-10 11:40:59.226101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:32.359 [2024-12-10 11:40:59.226118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.359 [2024-12-10 11:40:59.226220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.359 [2024-12-10 11:40:59.226233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:32.359 
[2024-12-10 11:40:59.226244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:33:32.359 [2024-12-10 11:40:59.226254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.359 [2024-12-10 11:40:59.226277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.359 [2024-12-10 11:40:59.226288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:32.359 [2024-12-10 11:40:59.226299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:32.359 [2024-12-10 11:40:59.226309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.359 [2024-12-10 11:40:59.226350] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:32.359 [2024-12-10 11:40:59.226362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.359 [2024-12-10 11:40:59.226372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:32.359 [2024-12-10 11:40:59.226382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:32.359 [2024-12-10 11:40:59.226392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.359 [2024-12-10 11:40:59.226440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.359 [2024-12-10 11:40:59.226452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:32.359 [2024-12-10 11:40:59.226462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:33:32.359 [2024-12-10 11:40:59.226472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.359 [2024-12-10 11:40:59.227689] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1584.359 ms, result 0 00:33:32.359 [2024-12-10 11:40:59.240023] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.359 [2024-12-10 11:40:59.256006] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:32.359 [2024-12-10 11:40:59.265191] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:32.359 11:40:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:32.359 11:40:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:32.359 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:32.359 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:32.359 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:33:32.359 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:32.359 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:32.359 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:32.359 Validate MD5 checksum, iteration 1 00:33:32.359 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:32.359 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:32.359 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:32.359 11:40:59 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:32.359 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:32.359 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:32.359 11:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:32.359 [2024-12-10 11:40:59.407064] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:33:32.359 [2024-12-10 11:40:59.407191] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84627 ] 00:33:32.618 [2024-12-10 11:40:59.585070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.618 [2024-12-10 11:40:59.690855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.525  [2024-12-10T11:41:02.207Z] Copying: 610/1024 [MB] (610 MBps) [2024-12-10T11:41:03.587Z] Copying: 1024/1024 [MB] (average 605 MBps) 00:33:36.473 00:33:36.732 11:41:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:36.732 11:41:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:38.642 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:38.642 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=781888b064a719c07bd4599fbe150746 00:33:38.642 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 781888b064a719c07bd4599fbe150746 != \7\8\1\8\8\8\b\0\6\4\a\7\1\9\c\0\7\b\d\4\5\9\9\f\b\e\1\5\0\7\4\6 ]] 00:33:38.642 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:38.642 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:38.642 Validate MD5 checksum, iteration 2 00:33:38.642 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:38.642 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:38.642 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:38.642 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:38.642 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:38.642 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:38.642 11:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:38.642 [2024-12-10 11:41:05.341670] Starting SPDK v25.01-pre git sha1 
52a413487 / DPDK 24.03.0 initialization... 00:33:38.642 [2024-12-10 11:41:05.341798] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84695 ] 00:33:38.642 [2024-12-10 11:41:05.518121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.642 [2024-12-10 11:41:05.626336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.547  [2024-12-10T11:41:08.229Z] Copying: 613/1024 [MB] (613 MBps) [2024-12-10T11:41:09.168Z] Copying: 1024/1024 [MB] (average 610 MBps) 00:33:42.054 00:33:42.054 11:41:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:42.054 11:41:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6780fdae5a80b0b6a2d4c5a8281521a2 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6780fdae5a80b0b6a2d4c5a8281521a2 != \6\7\8\0\f\d\a\e\5\a\8\0\b\0\b\6\a\2\d\4\c\5\a\8\2\8\1\5\2\1\a\2 ]] 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84588 ]] 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84588 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84588 ']' 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84588 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84588 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:43.965 killing process with pid 84588 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84588' 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84588 00:33:43.965 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84588 00:33:45.343 [2024-12-10 11:41:12.135478] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:45.343 [2024-12-10 11:41:12.154474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:45.343 [2024-12-10 11:41:12.154520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:45.343 [2024-12-10 11:41:12.154539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:45.343 [2024-12-10 11:41:12.154551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.343 [2024-12-10 11:41:12.154578] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:45.343 [2024-12-10 11:41:12.159149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:45.343 [2024-12-10 11:41:12.159185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:45.343 [2024-12-10 11:41:12.159198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.561 ms 00:33:45.343 [2024-12-10 11:41:12.159224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.343 [2024-12-10 11:41:12.159451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:45.343 [2024-12-10 11:41:12.159465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:45.343 [2024-12-10 11:41:12.159476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.200 ms 00:33:45.343 [2024-12-10 11:41:12.159488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.343 [2024-12-10 11:41:12.160675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:45.343 [2024-12-10 11:41:12.160710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:45.343 [2024-12-10 11:41:12.160723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.170 ms 00:33:45.343 [2024-12-10 11:41:12.160739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.343 [2024-12-10 11:41:12.161687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:45.343 [2024-12-10 11:41:12.161718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:45.343 [2024-12-10 11:41:12.161730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.914 ms 00:33:45.343 [2024-12-10 11:41:12.161740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.343 [2024-12-10 11:41:12.175846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:45.343 [2024-12-10 11:41:12.175885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:45.343 [2024-12-10 11:41:12.175905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.088 ms 00:33:45.344 [2024-12-10 11:41:12.175923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.344 [2024-12-10 11:41:12.183841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:45.344 [2024-12-10 11:41:12.183880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:45.344 [2024-12-10 11:41:12.183893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.892 ms 00:33:45.344 [2024-12-10 11:41:12.183903] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:45.344 [2024-12-10 11:41:12.183998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:45.344 [2024-12-10 11:41:12.184012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:45.344 [2024-12-10 11:41:12.184023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:33:45.344 [2024-12-10 11:41:12.184039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.344 [2024-12-10 11:41:12.197973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:45.344 [2024-12-10 11:41:12.198008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:45.344 [2024-12-10 11:41:12.198019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.939 ms 00:33:45.344 [2024-12-10 11:41:12.198044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.344 [2024-12-10 11:41:12.212400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:45.344 [2024-12-10 11:41:12.212436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:45.344 [2024-12-10 11:41:12.212448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.343 ms 00:33:45.344 [2024-12-10 11:41:12.212473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.344 [2024-12-10 11:41:12.225987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:45.344 [2024-12-10 11:41:12.226021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:45.344 [2024-12-10 11:41:12.226033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.499 ms 00:33:45.344 [2024-12-10 11:41:12.226059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.344 [2024-12-10 11:41:12.239633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:45.344 [2024-12-10 11:41:12.239669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:45.344 [2024-12-10 11:41:12.239681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.530 ms 00:33:45.344 [2024-12-10 11:41:12.239690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.344 [2024-12-10 11:41:12.239725] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:45.344 [2024-12-10 11:41:12.239741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:45.344 [2024-12-10 11:41:12.239753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:45.344 [2024-12-10 11:41:12.239764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:45.344 [2024-12-10 11:41:12.239774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:45.344 [2024-12-10 11:41:12.239786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:45.344 [2024-12-10 11:41:12.239797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:45.344 [2024-12-10 11:41:12.239809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:45.344 [2024-12-10 11:41:12.239819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:45.344 
[2024-12-10 11:41:12.239829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:45.344 [2024-12-10 11:41:12.239840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:45.344 [2024-12-10 11:41:12.239850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:45.344 [2024-12-10 11:41:12.239860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:45.344 [2024-12-10 11:41:12.239869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:45.344 [2024-12-10 11:41:12.239880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:45.344 [2024-12-10 11:41:12.239890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:45.344 [2024-12-10 11:41:12.239900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:45.344 [2024-12-10 11:41:12.239910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:45.344 [2024-12-10 11:41:12.239928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:45.344 [2024-12-10 11:41:12.239940] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:45.344 [2024-12-10 11:41:12.239950] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 111c6f31-3cf4-4c72-912f-760942f15cd4 00:33:45.344 [2024-12-10 11:41:12.239961] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:45.344 [2024-12-10 11:41:12.239971] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:33:45.344 [2024-12-10 11:41:12.239980] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:33:45.344 [2024-12-10 11:41:12.239990] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:33:45.344 [2024-12-10 11:41:12.239999] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:45.344 [2024-12-10 11:41:12.240009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:45.344 [2024-12-10 11:41:12.240025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:45.344 [2024-12-10 11:41:12.240033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:45.344 [2024-12-10 11:41:12.240057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:45.344 [2024-12-10 11:41:12.240068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:45.344 [2024-12-10 11:41:12.240084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:45.344 [2024-12-10 11:41:12.240095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.345 ms 00:33:45.344 [2024-12-10 11:41:12.240105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.344 [2024-12-10 11:41:12.261032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:45.344 [2024-12-10 11:41:12.261065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:45.344 [2024-12-10 11:41:12.261078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.930 ms 00:33:45.344 [2024-12-10 11:41:12.261089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
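The statistics dump just above reports 'WAF: inf' because the device recorded 320 total writes against 0 user writes (write amplification is the ratio of the two, and the divide-by-zero is printed as inf). For reference, the MD5 validation traced earlier in this run reduces to a loop along the following lines; this is only a sketch reconstructed from the xtrace output, and names such as md5[], testfile and iterations are assumptions — the authoritative logic lives in test/ftl/upgrade_shutdown.sh:

    # Assumed shape of the checksum validation seen in the xtrace above.
    # md5[] would hold the checksums recorded before the FTL shutdown/upgrade.
    skip=0
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # tcp_dd wraps spdk_dd, reading 1024 x 1 MiB blocks from ftln1 over NVMe/TCP
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testfile" | cut -f1 -d ' ')
        [[ $sum == "${md5[$i]}" ]] || exit 1   # any mismatch fails the test
    done

Both iterations above matched (781888b0... and 6780fdae...), which is why the run proceeds to the trap teardown and the FTL shutdown sequence that follows.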
00:33:45.344 [2024-12-10 11:41:12.261700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:45.344 [2024-12-10 11:41:12.261717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:45.344 [2024-12-10 11:41:12.261729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.582 ms 00:33:45.344 [2024-12-10 11:41:12.261740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.344 [2024-12-10 11:41:12.327162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:45.344 [2024-12-10 11:41:12.327198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:45.344 [2024-12-10 11:41:12.327211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:45.344 [2024-12-10 11:41:12.327227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.344 [2024-12-10 11:41:12.327262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:45.344 [2024-12-10 11:41:12.327274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:45.344 [2024-12-10 11:41:12.327285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:45.344 [2024-12-10 11:41:12.327295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.344 [2024-12-10 11:41:12.327372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:45.344 [2024-12-10 11:41:12.327386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:45.344 [2024-12-10 11:41:12.327398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:45.344 [2024-12-10 11:41:12.327409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.344 [2024-12-10 11:41:12.327433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:45.344 [2024-12-10 11:41:12.327444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:45.344 [2024-12-10 11:41:12.327454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:45.344 [2024-12-10 11:41:12.327464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.344 [2024-12-10 11:41:12.454443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:45.344 [2024-12-10 11:41:12.454500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:45.344 [2024-12-10 11:41:12.454517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:45.344 [2024-12-10 11:41:12.454529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.604 [2024-12-10 11:41:12.553985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:45.604 [2024-12-10 11:41:12.554035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:45.604 [2024-12-10 11:41:12.554052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:45.604 [2024-12-10 11:41:12.554065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.604 [2024-12-10 11:41:12.554204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:45.604 [2024-12-10 11:41:12.554219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:45.604 [2024-12-10 11:41:12.554231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:45.604 [2024-12-10 11:41:12.554243] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.604 [2024-12-10 11:41:12.554294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:45.604 [2024-12-10 11:41:12.554326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:45.604 [2024-12-10 11:41:12.554337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:45.604 [2024-12-10 11:41:12.554348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.604 [2024-12-10 11:41:12.554474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:45.604 [2024-12-10 11:41:12.554489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:45.604 [2024-12-10 11:41:12.554507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:45.604 [2024-12-10 11:41:12.554519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.604 [2024-12-10 11:41:12.554558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:45.604 [2024-12-10 11:41:12.554572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:45.604 [2024-12-10 11:41:12.554588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:45.604 [2024-12-10 11:41:12.554599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.604 [2024-12-10 11:41:12.554665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:45.604 [2024-12-10 11:41:12.554684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:45.604 [2024-12-10 11:41:12.554695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:45.604 [2024-12-10 11:41:12.554706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.604 [2024-12-10 11:41:12.554762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:45.604 [2024-12-10 11:41:12.554780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:45.604 [2024-12-10 11:41:12.554791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:45.604 [2024-12-10 11:41:12.554802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:45.604 [2024-12-10 11:41:12.554966] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 401.085 ms, result 0 00:33:49.799 11:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:49.799 11:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:49.799 11:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:33:49.799 11:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:33:49.799 11:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:33:49.799 11:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:49.799 11:41:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:33:49.799 Remove shared memory files 00:33:49.799 11:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:49.799 11:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:33:49.799 11:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:33:49.799 11:41:16 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84346 00:33:49.799 11:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:49.799 11:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:33:49.799 00:33:49.799 real 1m29.636s 00:33:49.799 user 1m59.362s 00:33:49.799 sys 0m24.944s 00:33:49.799 11:41:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:49.799 11:41:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:49.799 ************************************ 00:33:49.799 END TEST ftl_upgrade_shutdown 00:33:49.799 ************************************ 00:33:49.799 11:41:16 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:33:49.799 11:41:16 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:49.799 11:41:16 ftl -- ftl/ftl.sh@14 -- # killprocess 76754 00:33:49.799 11:41:16 ftl -- common/autotest_common.sh@954 -- # '[' -z 76754 ']' 00:33:49.799 11:41:16 ftl -- common/autotest_common.sh@958 -- # kill -0 76754 00:33:49.799 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76754) - No such process 00:33:49.799 Process with pid 76754 is not found 00:33:49.799 11:41:16 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76754 is not found' 00:33:49.799 11:41:16 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:49.799 11:41:16 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:49.799 11:41:16 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84828 00:33:49.799 11:41:16 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84828 00:33:49.799 11:41:16 ftl -- common/autotest_common.sh@835 -- # '[' -z 84828 ']' 00:33:49.799 11:41:16 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.799 11:41:16 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:49.799 11:41:16 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:49.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.799 11:41:16 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:49.799 11:41:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:49.799 [2024-12-10 11:41:16.795060] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:33:49.799 [2024-12-10 11:41:16.795200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84828 ] 00:33:50.058 [2024-12-10 11:41:16.974056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.058 [2024-12-10 11:41:17.103067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.996 11:41:18 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:50.996 11:41:18 ftl -- common/autotest_common.sh@868 -- # return 0 00:33:50.996 11:41:18 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:51.255 nvme0n1 00:33:51.255 11:41:18 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:51.255 11:41:18 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:51.255 11:41:18 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:51.514 11:41:18 ftl -- ftl/common.sh@28 -- # stores=1416ac0e-e00e-4495-8a49-64c237232957 00:33:51.514 11:41:18 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:51.514 11:41:18 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1416ac0e-e00e-4495-8a49-64c237232957 00:33:51.773 11:41:18 ftl -- ftl/ftl.sh@23 -- # killprocess 84828 00:33:51.773 11:41:18 ftl -- common/autotest_common.sh@954 -- # '[' -z 84828 ']' 00:33:51.773 11:41:18 ftl -- common/autotest_common.sh@958 -- # kill -0 84828 00:33:51.773 11:41:18 ftl -- common/autotest_common.sh@959 -- # uname 00:33:51.773 11:41:18 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:51.773 11:41:18 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84828 00:33:51.773 11:41:18 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:51.773 11:41:18 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:51.773 11:41:18 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84828' 00:33:51.773 killing process with pid 84828 00:33:51.773 11:41:18 ftl -- common/autotest_common.sh@973 -- # kill 84828 00:33:51.773 11:41:18 ftl -- common/autotest_common.sh@978 -- # wait 84828 00:33:54.310 11:41:21 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:54.570 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:54.829 Waiting for block devices as requested 00:33:54.829 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:55.089 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:55.089 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:55.348 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:00.695 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:00.695 11:41:27 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:00.695 11:41:27 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:00.695 Remove shared memory files 00:34:00.695 11:41:27 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:00.695 11:41:27 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:00.695 11:41:27 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:00.695 11:41:27 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:00.695 11:41:27 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:00.695 00:34:00.695 real 
11m55.414s 00:34:00.695 user 14m30.951s 00:34:00.695 sys 1m34.098s 00:34:00.695 11:41:27 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:00.695 ************************************ 00:34:00.695 END TEST ftl 00:34:00.695 11:41:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:00.695 ************************************ 00:34:00.695 11:41:27 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:00.695 11:41:27 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:00.695 11:41:27 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:00.695 11:41:27 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:00.695 11:41:27 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:00.695 11:41:27 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:00.695 11:41:27 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:00.695 11:41:27 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:00.695 11:41:27 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:34:00.695 11:41:27 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:00.695 11:41:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:00.695 11:41:27 -- common/autotest_common.sh@10 -- # set +x 00:34:00.695 11:41:27 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:00.695 11:41:27 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:00.695 11:41:27 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:00.695 11:41:27 -- common/autotest_common.sh@10 -- # set +x 00:34:03.233 INFO: APP EXITING 00:34:03.233 INFO: killing all VMs 00:34:03.233 INFO: killing vhost app 00:34:03.233 INFO: EXIT DONE 00:34:03.233 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:03.801 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:03.801 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:03.801 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:03.801 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:04.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:04.937 Cleaning 00:34:04.937 Removing: /var/run/dpdk/spdk0/config 00:34:04.937 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:04.937 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:04.937 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:04.937 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:04.937 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:04.937 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:04.937 Removing: /var/run/dpdk/spdk0 00:34:04.937 Removing: /var/run/dpdk/spdk_pid57622 00:34:04.937 Removing: /var/run/dpdk/spdk_pid57862 00:34:04.937 Removing: /var/run/dpdk/spdk_pid58097 00:34:04.937 Removing: /var/run/dpdk/spdk_pid58201 00:34:04.937 Removing: /var/run/dpdk/spdk_pid58257 00:34:04.937 Removing: /var/run/dpdk/spdk_pid58385 00:34:04.937 Removing: /var/run/dpdk/spdk_pid58414 00:34:04.937 Removing: /var/run/dpdk/spdk_pid58624 00:34:04.937 Removing: /var/run/dpdk/spdk_pid58730 00:34:04.937 Removing: /var/run/dpdk/spdk_pid58843 00:34:04.937 Removing: /var/run/dpdk/spdk_pid58965 00:34:04.937 Removing: /var/run/dpdk/spdk_pid59073 00:34:04.937 Removing: /var/run/dpdk/spdk_pid59112 00:34:04.937 Removing: /var/run/dpdk/spdk_pid59149 00:34:04.937 Removing: /var/run/dpdk/spdk_pid59225 00:34:04.937 Removing: /var/run/dpdk/spdk_pid59353 00:34:04.937 Removing: /var/run/dpdk/spdk_pid59802 00:34:04.937 Removing: /var/run/dpdk/spdk_pid59885 00:34:04.937 
Removing: /var/run/dpdk/spdk_pid59960
00:34:04.937 Removing: /var/run/dpdk/spdk_pid59976
00:34:04.937 Removing: /var/run/dpdk/spdk_pid60137
00:34:04.937 Removing: /var/run/dpdk/spdk_pid60153
00:34:04.937 Removing: /var/run/dpdk/spdk_pid60307
00:34:04.937 Removing: /var/run/dpdk/spdk_pid60328
00:34:04.937 Removing: /var/run/dpdk/spdk_pid60398
00:34:04.937 Removing: /var/run/dpdk/spdk_pid60416
00:34:04.937 Removing: /var/run/dpdk/spdk_pid60480
00:34:04.937 Removing: /var/run/dpdk/spdk_pid60503
00:34:04.937 Removing: /var/run/dpdk/spdk_pid60704
00:34:04.937 Removing: /var/run/dpdk/spdk_pid60735
00:34:04.937 Removing: /var/run/dpdk/spdk_pid60824
00:34:04.937 Removing: /var/run/dpdk/spdk_pid61019
00:34:04.937 Removing: /var/run/dpdk/spdk_pid61120
00:34:04.937 Removing: /var/run/dpdk/spdk_pid61162
00:34:04.937 Removing: /var/run/dpdk/spdk_pid61617
00:34:04.937 Removing: /var/run/dpdk/spdk_pid61721
00:34:04.937 Removing: /var/run/dpdk/spdk_pid61835
00:34:04.937 Removing: /var/run/dpdk/spdk_pid61894
00:34:04.937 Removing: /var/run/dpdk/spdk_pid61914
00:34:04.937 Removing: /var/run/dpdk/spdk_pid61999
00:34:04.937 Removing: /var/run/dpdk/spdk_pid62647
00:34:04.937 Removing: /var/run/dpdk/spdk_pid62695
00:34:04.937 Removing: /var/run/dpdk/spdk_pid63189
00:34:04.937 Removing: /var/run/dpdk/spdk_pid63287
00:34:04.937 Removing: /var/run/dpdk/spdk_pid63402
00:34:04.937 Removing: /var/run/dpdk/spdk_pid63455
00:34:04.937 Removing: /var/run/dpdk/spdk_pid63486
00:34:04.937 Removing: /var/run/dpdk/spdk_pid63511
00:34:05.197 Removing: /var/run/dpdk/spdk_pid65403
00:34:05.197 Removing: /var/run/dpdk/spdk_pid65551
00:34:05.197 Removing: /var/run/dpdk/spdk_pid65555
00:34:05.197 Removing: /var/run/dpdk/spdk_pid65578
00:34:05.197 Removing: /var/run/dpdk/spdk_pid65617
00:34:05.197 Removing: /var/run/dpdk/spdk_pid65621
00:34:05.197 Removing: /var/run/dpdk/spdk_pid65633
00:34:05.197 Removing: /var/run/dpdk/spdk_pid65683
00:34:05.197 Removing: /var/run/dpdk/spdk_pid65687
00:34:05.197 Removing: /var/run/dpdk/spdk_pid65699
00:34:05.197 Removing: /var/run/dpdk/spdk_pid65744
00:34:05.197 Removing: /var/run/dpdk/spdk_pid65748
00:34:05.197 Removing: /var/run/dpdk/spdk_pid65760
00:34:05.197 Removing: /var/run/dpdk/spdk_pid67181
00:34:05.197 Removing: /var/run/dpdk/spdk_pid67289
00:34:05.197 Removing: /var/run/dpdk/spdk_pid68731
00:34:05.197 Removing: /var/run/dpdk/spdk_pid70486
00:34:05.197 Removing: /var/run/dpdk/spdk_pid70567
00:34:05.197 Removing: /var/run/dpdk/spdk_pid70644
00:34:05.197 Removing: /var/run/dpdk/spdk_pid70759
00:34:05.197 Removing: /var/run/dpdk/spdk_pid70851
00:34:05.197 Removing: /var/run/dpdk/spdk_pid70952
00:34:05.197 Removing: /var/run/dpdk/spdk_pid71032
00:34:05.197 Removing: /var/run/dpdk/spdk_pid71114
00:34:05.197 Removing: /var/run/dpdk/spdk_pid71229
00:34:05.197 Removing: /var/run/dpdk/spdk_pid71321
00:34:05.197 Removing: /var/run/dpdk/spdk_pid71422
00:34:05.197 Removing: /var/run/dpdk/spdk_pid71502
00:34:05.197 Removing: /var/run/dpdk/spdk_pid71577
00:34:05.197 Removing: /var/run/dpdk/spdk_pid71692
00:34:05.198 Removing: /var/run/dpdk/spdk_pid71784
00:34:05.198 Removing: /var/run/dpdk/spdk_pid71885
00:34:05.198 Removing: /var/run/dpdk/spdk_pid71966
00:34:05.198 Removing: /var/run/dpdk/spdk_pid72047
00:34:05.198 Removing: /var/run/dpdk/spdk_pid72151
00:34:05.198 Removing: /var/run/dpdk/spdk_pid72248
00:34:05.198 Removing: /var/run/dpdk/spdk_pid72348
00:34:05.198 Removing: /var/run/dpdk/spdk_pid72433
00:34:05.198 Removing: /var/run/dpdk/spdk_pid72514
00:34:05.198 Removing: /var/run/dpdk/spdk_pid72591
00:34:05.198 Removing: /var/run/dpdk/spdk_pid72674
00:34:05.198 Removing: /var/run/dpdk/spdk_pid72784
00:34:05.198 Removing: /var/run/dpdk/spdk_pid72875
00:34:05.198 Removing: /var/run/dpdk/spdk_pid72974
00:34:05.198 Removing: /var/run/dpdk/spdk_pid73058
00:34:05.198 Removing: /var/run/dpdk/spdk_pid73132
00:34:05.198 Removing: /var/run/dpdk/spdk_pid73212
00:34:05.198 Removing: /var/run/dpdk/spdk_pid73286
00:34:05.198 Removing: /var/run/dpdk/spdk_pid73395
00:34:05.198 Removing: /var/run/dpdk/spdk_pid73486
00:34:05.198 Removing: /var/run/dpdk/spdk_pid73641
00:34:05.198 Removing: /var/run/dpdk/spdk_pid73937
00:34:05.198 Removing: /var/run/dpdk/spdk_pid73974
00:34:05.457 Removing: /var/run/dpdk/spdk_pid74429
00:34:05.457 Removing: /var/run/dpdk/spdk_pid74613
00:34:05.457 Removing: /var/run/dpdk/spdk_pid74717
00:34:05.457 Removing: /var/run/dpdk/spdk_pid74834
00:34:05.457 Removing: /var/run/dpdk/spdk_pid74894
00:34:05.457 Removing: /var/run/dpdk/spdk_pid74919
00:34:05.457 Removing: /var/run/dpdk/spdk_pid75210
00:34:05.457 Removing: /var/run/dpdk/spdk_pid75284
00:34:05.457 Removing: /var/run/dpdk/spdk_pid75370
00:34:05.457 Removing: /var/run/dpdk/spdk_pid75798
00:34:05.457 Removing: /var/run/dpdk/spdk_pid75943
00:34:05.457 Removing: /var/run/dpdk/spdk_pid76754
00:34:05.457 Removing: /var/run/dpdk/spdk_pid76902
00:34:05.457 Removing: /var/run/dpdk/spdk_pid77098
00:34:05.457 Removing: /var/run/dpdk/spdk_pid77207
00:34:05.457 Removing: /var/run/dpdk/spdk_pid77566
00:34:05.457 Removing: /var/run/dpdk/spdk_pid77859
00:34:05.457 Removing: /var/run/dpdk/spdk_pid78264
00:34:05.457 Removing: /var/run/dpdk/spdk_pid78498
00:34:05.457 Removing: /var/run/dpdk/spdk_pid78658
00:34:05.457 Removing: /var/run/dpdk/spdk_pid78727
00:34:05.457 Removing: /var/run/dpdk/spdk_pid78865
00:34:05.457 Removing: /var/run/dpdk/spdk_pid78901
00:34:05.457 Removing: /var/run/dpdk/spdk_pid78965
00:34:05.457 Removing: /var/run/dpdk/spdk_pid79168
00:34:05.457 Removing: /var/run/dpdk/spdk_pid79410
00:34:05.457 Removing: /var/run/dpdk/spdk_pid79887
00:34:05.457 Removing: /var/run/dpdk/spdk_pid80369
00:34:05.457 Removing: /var/run/dpdk/spdk_pid80849
00:34:05.457 Removing: /var/run/dpdk/spdk_pid81407
00:34:05.457 Removing: /var/run/dpdk/spdk_pid81555
00:34:05.457 Removing: /var/run/dpdk/spdk_pid81648
00:34:05.457 Removing: /var/run/dpdk/spdk_pid82316
00:34:05.457 Removing: /var/run/dpdk/spdk_pid82380
00:34:05.458 Removing: /var/run/dpdk/spdk_pid82888
00:34:05.458 Removing: /var/run/dpdk/spdk_pid83264
00:34:05.458 Removing: /var/run/dpdk/spdk_pid83781
00:34:05.458 Removing: /var/run/dpdk/spdk_pid83909
00:34:05.458 Removing: /var/run/dpdk/spdk_pid83967
00:34:05.458 Removing: /var/run/dpdk/spdk_pid84033
00:34:05.458 Removing: /var/run/dpdk/spdk_pid84092
00:34:05.458 Removing: /var/run/dpdk/spdk_pid84157
00:34:05.458 Removing: /var/run/dpdk/spdk_pid84346
00:34:05.458 Removing: /var/run/dpdk/spdk_pid84451
00:34:05.458 Removing: /var/run/dpdk/spdk_pid84518
00:34:05.458 Removing: /var/run/dpdk/spdk_pid84588
00:34:05.458 Removing: /var/run/dpdk/spdk_pid84627
00:34:05.458 Removing: /var/run/dpdk/spdk_pid84695
00:34:05.458 Removing: /var/run/dpdk/spdk_pid84828
00:34:05.458 Clean
00:34:05.717 11:41:32 -- common/autotest_common.sh@1453 -- # return 0
00:34:05.717 11:41:32 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:34:05.717 11:41:32 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:05.717 11:41:32 -- common/autotest_common.sh@10 -- # set +x
00:34:05.717 11:41:32 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:34:05.717 11:41:32 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:05.717 11:41:32 -- common/autotest_common.sh@10 -- # set +x
00:34:05.717 11:41:32 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:05.717 11:41:32 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:34:05.717 11:41:32 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:34:05.717 11:41:32 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:34:05.717 11:41:32 -- spdk/autotest.sh@398 -- # hostname
00:34:05.717 11:41:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:34:05.976 geninfo: WARNING: invalid characters removed from testname!
00:34:32.536 11:41:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:34.443 11:42:01 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:36.977 11:42:03 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:38.883 11:42:05 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:41.417 11:42:07 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:43.322 11:42:10 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:45.859 11:42:12 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:45.859 11:42:12 -- spdk/autorun.sh@1 -- $ timing_finish
00:34:45.859 11:42:12 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:34:45.859 11:42:12 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:45.860 11:42:12 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:45.860 11:42:12 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:45.860 + [[ -n 5251 ]]
00:34:45.860 + sudo kill 5251
00:34:45.870 [Pipeline] }
00:34:45.887 [Pipeline] // timeout
00:34:45.892 [Pipeline] }
00:34:45.910 [Pipeline] // stage
00:34:45.916 [Pipeline] }
00:34:45.932 [Pipeline] // catchError
00:34:45.940 [Pipeline] stage
00:34:45.942 [Pipeline] { (Stop VM)
00:34:45.953 [Pipeline] sh
00:34:46.290 + vagrant halt
00:34:49.585 ==> default: Halting domain...
00:34:56.172 [Pipeline] sh
00:34:56.454 + vagrant destroy -f
00:34:58.989 ==> default: Removing domain...
00:34:59.570 [Pipeline] sh
00:34:59.853 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:34:59.863 [Pipeline] }
00:34:59.877 [Pipeline] // stage
00:34:59.882 [Pipeline] }
00:34:59.896 [Pipeline] // dir
00:34:59.902 [Pipeline] }
00:34:59.916 [Pipeline] // wrap
00:34:59.922 [Pipeline] }
00:34:59.934 [Pipeline] // catchError
00:34:59.944 [Pipeline] stage
00:34:59.946 [Pipeline] { (Epilogue)
00:34:59.958 [Pipeline] sh
00:35:00.242 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:05.529 [Pipeline] catchError
00:35:05.531 [Pipeline] {
00:35:05.544 [Pipeline] sh
00:35:05.827 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:05.827 Artifacts sizes are good
00:35:05.837 [Pipeline] }
00:35:05.852 [Pipeline] // catchError
00:35:05.862 [Pipeline] archiveArtifacts
00:35:05.870 Archiving artifacts
00:35:05.995 [Pipeline] cleanWs
00:35:06.014 [WS-CLEANUP] Deleting project workspace...
00:35:06.014 [WS-CLEANUP] Deferred wipeout is used...
00:35:06.037 [WS-CLEANUP] done
00:35:06.039 [Pipeline] }
00:35:06.054 [Pipeline] // stage
00:35:06.059 [Pipeline] }
00:35:06.072 [Pipeline] // node
00:35:06.077 [Pipeline] End of Pipeline
00:35:06.120 Finished: SUCCESS